Test Report: Docker_Linux_crio 22230

c636a8658fdd5cfdd18416b9a30087c97060a836:2025-12-19:42856

Failed tests (36/415)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.27
44 TestAddons/parallel/Registry 13.47
45 TestAddons/parallel/RegistryCreds 0.42
46 TestAddons/parallel/Ingress 146.42
47 TestAddons/parallel/InspektorGadget 5.25
48 TestAddons/parallel/MetricsServer 5.31
50 TestAddons/parallel/CSI 31.09
51 TestAddons/parallel/Headlamp 2.61
52 TestAddons/parallel/CloudSpanner 5.26
53 TestAddons/parallel/LocalPath 8.14
54 TestAddons/parallel/NvidiaDevicePlugin 5.26
55 TestAddons/parallel/Yakd 5.25
56 TestAddons/parallel/AmdGpuDevicePlugin 5.27
99 TestFunctional/parallel/DashboardCmd 17.13
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 19.87
294 TestJSONOutput/pause/Command 2.05
300 TestJSONOutput/unpause/Command 1.47
364 TestPause/serial/Pause 6.88
450 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.23
451 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.71
462 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.53
464 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.99
468 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.34
469 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.3
472 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.35
473 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.32
474 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 542.42
475 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 542.53
476 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 542.37
477 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.51
479 TestStartStop/group/old-k8s-version/serial/Pause 6.58
481 TestStartStop/group/no-preload/serial/Pause 6.04
485 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.18
488 TestStartStop/group/embed-certs/serial/Pause 5.81
490 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.74
496 TestStartStop/group/newest-cni/serial/Pause 6.35
TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable volcano --alsologtostderr -v=1: exit status 11 (265.368737ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1219 02:26:37.805414   18217 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:26:37.805753   18217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:37.805763   18217 out.go:374] Setting ErrFile to fd 2...
	I1219 02:26:37.805768   18217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:37.805961   18217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:26:37.806217   18217 mustload.go:66] Loading cluster: addons-791857
	I1219 02:26:37.806530   18217 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:37.806553   18217 addons.go:638] checking whether the cluster is paused
	I1219 02:26:37.806668   18217 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:37.806683   18217 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:26:37.807059   18217 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:26:37.826202   18217 ssh_runner.go:195] Run: systemctl --version
	I1219 02:26:37.826263   18217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:26:37.847163   18217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:26:37.951281   18217 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:26:37.951368   18217 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:26:37.984070   18217 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:26:37.984108   18217 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:26:37.984114   18217 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:26:37.984119   18217 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:26:37.984123   18217 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:26:37.984129   18217 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:26:37.984134   18217 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:26:37.984139   18217 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:26:37.984143   18217 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:26:37.984157   18217 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:26:37.984164   18217 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:26:37.984169   18217 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:26:37.984176   18217 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:26:37.984181   18217 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:26:37.984188   18217 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:26:37.984198   18217 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:26:37.984204   18217 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:26:37.984222   18217 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:26:37.984230   18217 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:26:37.984235   18217 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:26:37.984242   18217 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:26:37.984255   18217 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:26:37.984259   18217 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:26:37.984267   18217 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:26:37.984272   18217 cri.go:92] found id: ""
	I1219 02:26:37.984331   18217 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:26:37.999204   18217 out.go:203] 
	W1219 02:26:38.000613   18217 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:26:38.000643   18217 out.go:285] * 
	* 
	W1219 02:26:38.003799   18217 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:26:38.004955   18217 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.27s)
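Every addon-disable failure in this run exits with the same MK_ADDON_DISABLE_PAUSED error: the paused-state check shells out to `sudo runc list -f json`, which fails outright on this CRI-O node because /run/runc does not exist. The following is a minimal Go sketch of such a check, not minikube's actual implementation; the helper name pausedContainers is hypothetical. It shows the shape of the failing call and how a missing runc state directory could be read as "no paused containers" instead of a fatal error.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer holds the fields of interest from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainers lists runc-managed containers in the paused state.
// This is a hypothetical sketch, not minikube's code.
func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On this CRI-O node /run/runc is absent, which is exactly the error
		// in the log above; one option is to treat that as "nothing paused".
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var containers []runcContainer
	if s := strings.TrimSpace(string(out)); s != "" && s != "null" {
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

The same error string recurs verbatim in the Registry, RegistryCreds, and later addon tests below, so a single root cause on the node is the likely explanation rather than per-addon problems.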

TestAddons/parallel/Registry (13.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.383697ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002480634s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004558656s
addons_test.go:394: (dbg) Run:  kubectl --context addons-791857 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-791857 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-791857 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.947715583s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable registry --alsologtostderr -v=1: exit status 11 (253.700187ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1219 02:27:01.292897   21164 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:27:01.293033   21164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:01.293041   21164 out.go:374] Setting ErrFile to fd 2...
	I1219 02:27:01.293046   21164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:01.293245   21164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:27:01.293483   21164 mustload.go:66] Loading cluster: addons-791857
	I1219 02:27:01.293793   21164 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:01.293821   21164 addons.go:638] checking whether the cluster is paused
	I1219 02:27:01.293908   21164 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:01.293927   21164 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:27:01.294271   21164 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:27:01.313065   21164 ssh_runner.go:195] Run: systemctl --version
	I1219 02:27:01.313117   21164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:27:01.332053   21164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:27:01.433571   21164 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:27:01.433714   21164 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:27:01.465526   21164 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:27:01.465552   21164 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:27:01.465558   21164 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:27:01.465563   21164 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:27:01.465568   21164 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:27:01.465574   21164 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:27:01.465578   21164 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:27:01.465581   21164 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:27:01.465584   21164 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:27:01.465594   21164 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:27:01.465601   21164 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:27:01.465604   21164 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:27:01.465607   21164 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:27:01.465610   21164 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:27:01.465612   21164 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:27:01.465617   21164 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:27:01.465622   21164 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:27:01.465628   21164 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:27:01.465637   21164 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:27:01.465642   21164 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:27:01.465650   21164 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:27:01.465655   21164 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:27:01.465660   21164 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:27:01.465668   21164 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:27:01.465679   21164 cri.go:92] found id: ""
	I1219 02:27:01.465744   21164 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:27:01.480336   21164 out.go:203] 
	W1219 02:27:01.481687   21164 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:27:01.481717   21164 out.go:285] * 
	* 
	W1219 02:27:01.484667   21164 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:27:01.485923   21164 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.47s)

TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.386344ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-791857
addons_test.go:334: (dbg) Run:  kubectl --context addons-791857 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (245.763789ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1219 02:27:01.721847   21308 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:27:01.721991   21308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:01.722001   21308 out.go:374] Setting ErrFile to fd 2...
	I1219 02:27:01.722005   21308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:01.722206   21308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:27:01.722455   21308 mustload.go:66] Loading cluster: addons-791857
	I1219 02:27:01.722777   21308 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:01.722796   21308 addons.go:638] checking whether the cluster is paused
	I1219 02:27:01.722876   21308 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:01.722888   21308 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:27:01.723244   21308 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:27:01.742474   21308 ssh_runner.go:195] Run: systemctl --version
	I1219 02:27:01.742531   21308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:27:01.759770   21308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:27:01.859379   21308 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:27:01.859457   21308 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:27:01.887918   21308 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:27:01.887946   21308 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:27:01.887952   21308 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:27:01.887958   21308 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:27:01.887962   21308 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:27:01.887967   21308 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:27:01.887972   21308 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:27:01.887977   21308 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:27:01.887982   21308 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:27:01.887989   21308 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:27:01.887994   21308 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:27:01.887998   21308 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:27:01.888008   21308 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:27:01.888011   21308 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:27:01.888017   21308 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:27:01.888024   21308 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:27:01.888027   21308 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:27:01.888031   21308 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:27:01.888034   21308 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:27:01.888037   21308 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:27:01.888047   21308 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:27:01.888053   21308 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:27:01.888056   21308 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:27:01.888060   21308 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:27:01.888063   21308 cri.go:92] found id: ""
	I1219 02:27:01.888099   21308 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:27:01.902498   21308 out.go:203] 
	W1219 02:27:01.903868   21308 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:27:01.903889   21308 out.go:285] * 
	* 
	W1219 02:27:01.906812   21308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:27:01.908196   21308 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (146.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-791857 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-791857 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-791857 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [78832942-35c4-4f34-b5fe-f92e36df47e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [78832942-35c4-4f34-b5fe-f92e36df47e9] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.00294643s
I1219 02:27:04.598342    8536 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.823857132s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
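The failing probe is a curl against the node's loopback with a Host header of nginx.example.com, run over ssh; exit status 28 corresponds to curl's operation-timed-out error, so no response arrived from the ingress controller in roughly 2m14s. A rough Go equivalent of that probe, run from the host against the node IP reported by docker inspect below (192.168.49.2 comes from this report; the 10s timeout is an arbitrary choice for illustration):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// 10s is an arbitrary timeout; the test's curl waited much longer.
	client := &http.Client{Timeout: 10 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	// ingress-nginx routes on the Host header, so set it explicitly.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// A timeout here mirrors curl's exit status 28 in the log above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}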
addons_test.go:290: (dbg) Run:  kubectl --context addons-791857 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-791857
helpers_test.go:244: (dbg) docker inspect addons-791857:

-- stdout --
	[
	    {
	        "Id": "5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673",
	        "Created": "2025-12-19T02:25:26.750670587Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 10952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T02:25:26.783958646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673/hosts",
	        "LogPath": "/var/lib/docker/containers/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673-json.log",
	        "Name": "/addons-791857",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-791857:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-791857",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673",
	                "LowerDir": "/var/lib/docker/overlay2/42ba77f82f3c4f7d364e625b82335250adc58df585512dacfc98666463bf98fa-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42ba77f82f3c4f7d364e625b82335250adc58df585512dacfc98666463bf98fa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42ba77f82f3c4f7d364e625b82335250adc58df585512dacfc98666463bf98fa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42ba77f82f3c4f7d364e625b82335250adc58df585512dacfc98666463bf98fa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-791857",
	                "Source": "/var/lib/docker/volumes/addons-791857/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-791857",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-791857",
	                "name.minikube.sigs.k8s.io": "addons-791857",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "838464d46434f3e6463480e0b499a0493111eab4df4e3ed6e548d8abe7075335",
	            "SandboxKey": "/var/run/docker/netns/838464d46434",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-791857": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "002009ae9763ecdde824289a99be22a5caad9b24ec2d08c4f4654f0b0a112e69",
	                    "EndpointID": "c7c4e5f1a40685763891b4a90d2cf2e6d789f276ded6a42c5b647bbe1445ce01",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "fa:0e:12:84:f7:6e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-791857",
	                        "5f8c6486dcdf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-791857 -n addons-791857
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-791857 logs -n 25: (1.147807866s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-072289 --alsologtostderr --binary-mirror http://127.0.0.1:46753 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-072289 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ -p binary-mirror-072289                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-072289 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ addons  │ disable dashboard -p addons-791857                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ addons  │ enable dashboard -p addons-791857                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ start   │ -p addons-791857 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:26 UTC │
	│ addons  │ addons-791857 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ addons  │ addons-791857 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ addons  │ enable headlamp -p addons-791857 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ addons  │ addons-791857 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ addons  │ addons-791857 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ ssh     │ addons-791857 ssh cat /opt/local-path-provisioner/pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │ 19 Dec 25 02:26 UTC │
	│ addons  │ addons-791857 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ addons  │ addons-791857 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ addons  │ addons-791857 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ ip      │ addons-791857 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │ 19 Dec 25 02:27 UTC │
	│ addons  │ addons-791857 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-791857                                                                                                                                                                                                                                                                                                                                                                                           │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │ 19 Dec 25 02:27 UTC │
	│ addons  │ addons-791857 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-791857 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │                     │
	│ ssh     │ addons-791857 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-791857 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-791857 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-791857 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │                     │
	│ addons  │ addons-791857 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:27 UTC │                     │
	│ ip      │ addons-791857 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-791857        │ jenkins │ v1.37.0 │ 19 Dec 25 02:29 UTC │ 19 Dec 25 02:29 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
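	For reference, the addon-disable invocations that show no completion timestamp in the table above can be re-run by hand against the same profile. A minimal shell sketch (binary path, profile name, and flags are taken from this report; running it assumes the same local checkout and a live cluster):
	
	    PROFILE=addons-791857
	    MINIKUBE=out/minikube-linux-amd64
	
	    # Re-run one of the disables that never logged a completion time.
	    "$MINIKUBE" -p "$PROFILE" addons disable registry --alsologtostderr -v=1
	    echo "addons disable exited with status $?"
	
	    # Check which addons are still reported as enabled afterwards.
	    "$MINIKUBE" -p "$PROFILE" addons list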
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:04.233753   10286 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:04.233984   10286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:04.233991   10286 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:04.233995   10286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:04.234162   10286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:25:04.234629   10286 out.go:368] Setting JSON to false
	I1219 02:25:04.235390   10286 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":455,"bootTime":1766110649,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:04.235439   10286 start.go:143] virtualization: kvm guest
	I1219 02:25:04.237276   10286 out.go:179] * [addons-791857] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:04.238460   10286 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:25:04.238462   10286 notify.go:221] Checking for updates...
	I1219 02:25:04.240817   10286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:04.241981   10286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:25:04.243184   10286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:25:04.244275   10286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:25:04.245336   10286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:25:04.246562   10286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:04.268847   10286 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:25:04.268925   10286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:04.321388   10286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-19 02:25:04.311323087 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:04.321499   10286 docker.go:319] overlay module found
	I1219 02:25:04.323816   10286 out.go:179] * Using the docker driver based on user configuration
	I1219 02:25:04.324818   10286 start.go:309] selected driver: docker
	I1219 02:25:04.324832   10286 start.go:928] validating driver "docker" against <nil>
	I1219 02:25:04.324842   10286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:25:04.325376   10286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:04.377850   10286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-19 02:25:04.368832387 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:04.378045   10286 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:04.378236   10286 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 02:25:04.379857   10286 out.go:179] * Using Docker driver with root privileges
	I1219 02:25:04.381096   10286 cni.go:84] Creating CNI manager for ""
	I1219 02:25:04.381200   10286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 02:25:04.381212   10286 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:04.381280   10286 start.go:353] cluster config:
	{Name:addons-791857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1219 02:25:04.382667   10286 out.go:179] * Starting "addons-791857" primary control-plane node in "addons-791857" cluster
	I1219 02:25:04.383963   10286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 02:25:04.385169   10286 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 02:25:04.386477   10286 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:04.386518   10286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 02:25:04.386530   10286 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:04.386563   10286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 02:25:04.386622   10286 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 02:25:04.386633   10286 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 02:25:04.386998   10286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/config.json ...
	I1219 02:25:04.387024   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/config.json: {Name:mk2fa1c08becfda12e3568c02e4dcff816f2d73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:04.404690   10286 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1219 02:25:04.404821   10286 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory
	I1219 02:25:04.404839   10286 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory, skipping pull
	I1219 02:25:04.404844   10286 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in cache, skipping pull
	I1219 02:25:04.404851   10286 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 as a tarball
	I1219 02:25:04.404858   10286 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 from local cache
	I1219 02:25:18.395521   10286 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 from cached tarball
	I1219 02:25:18.395561   10286 cache.go:243] Successfully downloaded all kic artifacts
	I1219 02:25:18.395610   10286 start.go:360] acquireMachinesLock for addons-791857: {Name:mke15be50e9dd63ff80b5d97d17892540ef58ee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:25:18.395730   10286 start.go:364] duration metric: took 97.595µs to acquireMachinesLock for "addons-791857"
	I1219 02:25:18.395757   10286 start.go:93] Provisioning new machine with config: &{Name:addons-791857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 02:25:18.395827   10286 start.go:125] createHost starting for "" (driver="docker")
	I1219 02:25:18.397541   10286 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1219 02:25:18.397783   10286 start.go:159] libmachine.API.Create for "addons-791857" (driver="docker")
	I1219 02:25:18.397821   10286 client.go:173] LocalClient.Create starting
	I1219 02:25:18.397912   10286 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem
	I1219 02:25:18.488696   10286 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem
	I1219 02:25:18.553284   10286 cli_runner.go:164] Run: docker network inspect addons-791857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 02:25:18.570499   10286 cli_runner.go:211] docker network inspect addons-791857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 02:25:18.570574   10286 network_create.go:284] running [docker network inspect addons-791857] to gather additional debugging logs...
	I1219 02:25:18.570593   10286 cli_runner.go:164] Run: docker network inspect addons-791857
	W1219 02:25:18.586398   10286 cli_runner.go:211] docker network inspect addons-791857 returned with exit code 1
	I1219 02:25:18.586427   10286 network_create.go:287] error running [docker network inspect addons-791857]: docker network inspect addons-791857: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-791857 not found
	I1219 02:25:18.586442   10286 network_create.go:289] output of [docker network inspect addons-791857]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-791857 not found
	
	** /stderr **
	I1219 02:25:18.586517   10286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 02:25:18.602921   10286 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eca860}
	I1219 02:25:18.602970   10286 network_create.go:124] attempt to create docker network addons-791857 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1219 02:25:18.603020   10286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-791857 addons-791857
	I1219 02:25:18.649061   10286 network_create.go:108] docker network addons-791857 192.168.49.0/24 created
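	(The probe-then-create sequence logged above, a "docker network inspect" that is expected to fail with "not found" followed by "docker network create", can be reproduced outside minikube. A sketch assuming a local Docker CLI; all option values are copied from the command in this log:
	
	    NET=addons-791857
	    # Create the bridge network only if the inspect probe fails.
	    docker network inspect "$NET" >/dev/null 2>&1 || \
	      docker network create --driver=bridge \
	        --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	        --label=created_by.minikube.sigs.k8s.io=true \
	        --label=name.minikube.sigs.k8s.io="$NET" \
	        "$NET"
	)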
	I1219 02:25:18.649088   10286 kic.go:121] calculated static IP "192.168.49.2" for the "addons-791857" container
	I1219 02:25:18.649167   10286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 02:25:18.665603   10286 cli_runner.go:164] Run: docker volume create addons-791857 --label name.minikube.sigs.k8s.io=addons-791857 --label created_by.minikube.sigs.k8s.io=true
	I1219 02:25:18.682555   10286 oci.go:103] Successfully created a docker volume addons-791857
	I1219 02:25:18.682626   10286 cli_runner.go:164] Run: docker run --rm --name addons-791857-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-791857 --entrypoint /usr/bin/test -v addons-791857:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1219 02:25:22.841900   10286 cli_runner.go:217] Completed: docker run --rm --name addons-791857-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-791857 --entrypoint /usr/bin/test -v addons-791857:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib: (4.159232605s)
	I1219 02:25:22.841935   10286 oci.go:107] Successfully prepared a docker volume addons-791857
	I1219 02:25:22.841999   10286 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:22.842011   10286 kic.go:194] Starting extracting preloaded images to volume ...
	I1219 02:25:22.842052   10286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-791857:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1219 02:25:26.678998   10286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-791857:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.836891023s)
	I1219 02:25:26.679032   10286 kic.go:203] duration metric: took 3.837018565s to extract preloaded images to volume ...
	W1219 02:25:26.679181   10286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1219 02:25:26.679259   10286 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1219 02:25:26.679314   10286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 02:25:26.734047   10286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-791857 --name addons-791857 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-791857 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-791857 --network addons-791857 --ip 192.168.49.2 --volume addons-791857:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1219 02:25:27.018248   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Running}}
	I1219 02:25:27.035913   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:27.054936   10286 cli_runner.go:164] Run: docker exec addons-791857 stat /var/lib/dpkg/alternatives/iptables
	I1219 02:25:27.103938   10286 oci.go:144] the created container "addons-791857" has a running status.
	I1219 02:25:27.103971   10286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa...
	I1219 02:25:27.175422   10286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 02:25:27.201189   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:27.218240   10286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1219 02:25:27.218262   10286 kic_runner.go:114] Args: [docker exec --privileged addons-791857 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1219 02:25:27.285821   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:27.311266   10286 machine.go:94] provisionDockerMachine start ...
	I1219 02:25:27.311467   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:27.335022   10286 main.go:144] libmachine: Using SSH client type: native
	I1219 02:25:27.335271   10286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1219 02:25:27.335296   10286 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 02:25:27.483961   10286 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-791857
	
	I1219 02:25:27.483992   10286 ubuntu.go:182] provisioning hostname "addons-791857"
	I1219 02:25:27.484056   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:27.503113   10286 main.go:144] libmachine: Using SSH client type: native
	I1219 02:25:27.503324   10286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1219 02:25:27.503336   10286 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-791857 && echo "addons-791857" | sudo tee /etc/hostname
	I1219 02:25:27.658171   10286 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-791857
	
	I1219 02:25:27.658277   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:27.678221   10286 main.go:144] libmachine: Using SSH client type: native
	I1219 02:25:27.678440   10286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1219 02:25:27.678455   10286 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-791857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-791857/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-791857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 02:25:27.822244   10286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 02:25:27.822271   10286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 02:25:27.822289   10286 ubuntu.go:190] setting up certificates
	I1219 02:25:27.822298   10286 provision.go:84] configureAuth start
	I1219 02:25:27.822347   10286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-791857
	I1219 02:25:27.838923   10286 provision.go:143] copyHostCerts
	I1219 02:25:27.838995   10286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 02:25:27.839116   10286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 02:25:27.839186   10286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 02:25:27.839256   10286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.addons-791857 san=[127.0.0.1 192.168.49.2 addons-791857 localhost minikube]
	I1219 02:25:27.981743   10286 provision.go:177] copyRemoteCerts
	I1219 02:25:27.981798   10286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 02:25:27.981830   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:27.998222   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.099838   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1219 02:25:28.117695   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 02:25:28.133574   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 02:25:28.149581   10286 provision.go:87] duration metric: took 327.266981ms to configureAuth
	I1219 02:25:28.149614   10286 ubuntu.go:206] setting minikube options for container-runtime
	I1219 02:25:28.149805   10286 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:25:28.149920   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.166767   10286 main.go:144] libmachine: Using SSH client type: native
	I1219 02:25:28.166977   10286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1219 02:25:28.166996   10286 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 02:25:28.442308   10286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 02:25:28.442335   10286 machine.go:97] duration metric: took 1.131045992s to provisionDockerMachine
	I1219 02:25:28.442346   10286 client.go:176] duration metric: took 10.044512243s to LocalClient.Create
	I1219 02:25:28.442368   10286 start.go:167] duration metric: took 10.044583292s to libmachine.API.Create "addons-791857"
	I1219 02:25:28.442378   10286 start.go:293] postStartSetup for "addons-791857" (driver="docker")
	I1219 02:25:28.442392   10286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 02:25:28.442443   10286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 02:25:28.442481   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.460137   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.562861   10286 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 02:25:28.566301   10286 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 02:25:28.566336   10286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 02:25:28.566350   10286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 02:25:28.566401   10286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 02:25:28.566424   10286 start.go:296] duration metric: took 124.03895ms for postStartSetup
	I1219 02:25:28.566685   10286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-791857
	I1219 02:25:28.584664   10286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/config.json ...
	I1219 02:25:28.584935   10286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:25:28.584975   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.602741   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.699523   10286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 02:25:28.703872   10286 start.go:128] duration metric: took 10.308030596s to createHost
	I1219 02:25:28.703902   10286 start.go:83] releasing machines lock for "addons-791857", held for 10.308157941s
	I1219 02:25:28.703966   10286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-791857
	I1219 02:25:28.722345   10286 ssh_runner.go:195] Run: cat /version.json
	I1219 02:25:28.722390   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.722445   10286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 02:25:28.722529   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.740225   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.740553   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.890252   10286 ssh_runner.go:195] Run: systemctl --version
	I1219 02:25:28.896596   10286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 02:25:28.928747   10286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 02:25:28.933137   10286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 02:25:28.933211   10286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 02:25:28.957836   10286 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 02:25:28.957858   10286 start.go:496] detecting cgroup driver to use...
	I1219 02:25:28.957886   10286 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 02:25:28.957921   10286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 02:25:28.973556   10286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 02:25:28.985182   10286 docker.go:218] disabling cri-docker service (if available) ...
	I1219 02:25:28.985231   10286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 02:25:29.001110   10286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 02:25:29.018195   10286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 02:25:29.098268   10286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 02:25:29.184999   10286 docker.go:234] disabling docker service ...
	I1219 02:25:29.185059   10286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 02:25:29.202849   10286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 02:25:29.214848   10286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 02:25:29.297717   10286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 02:25:29.376887   10286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 02:25:29.388775   10286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 02:25:29.402373   10286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 02:25:29.402425   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.412126   10286 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 02:25:29.412183   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.420408   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.428468   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.436597   10286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 02:25:29.444031   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.451938   10286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.464462   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.472834   10286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 02:25:29.479741   10286 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 02:25:29.479784   10286 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 02:25:29.491090   10286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 02:25:29.498018   10286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:25:29.576730   10286 ssh_runner.go:195] Run: sudo systemctl restart crio
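	(The CRI-O adjustments above amount to a handful of sed edits on the drop-in config followed by a service restart. A condensed sketch of the same steps; the file path, pause image, and cgroup manager come from this log, and this is not the full set of edits minikube applies:
	
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    # Point CRI-O at the pause image and cgroup manager minikube expects.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	    # Reload units and restart the runtime so the changes take effect.
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio
	)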
	I1219 02:25:29.701964   10286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 02:25:29.702031   10286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 02:25:29.705732   10286 start.go:564] Will wait 60s for crictl version
	I1219 02:25:29.705777   10286 ssh_runner.go:195] Run: which crictl
	I1219 02:25:29.709240   10286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 02:25:29.734227   10286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 02:25:29.734349   10286 ssh_runner.go:195] Run: crio --version
	I1219 02:25:29.760835   10286 ssh_runner.go:195] Run: crio --version
	I1219 02:25:29.788528   10286 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 02:25:29.789820   10286 cli_runner.go:164] Run: docker network inspect addons-791857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 02:25:29.806005   10286 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1219 02:25:29.809920   10286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 02:25:29.819487   10286 kubeadm.go:884] updating cluster {Name:addons-791857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 02:25:29.819588   10286 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:29.819627   10286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 02:25:29.850327   10286 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 02:25:29.850350   10286 crio.go:433] Images already preloaded, skipping extraction
	I1219 02:25:29.850395   10286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 02:25:29.874266   10286 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 02:25:29.874290   10286 cache_images.go:86] Images are preloaded, skipping loading
	I1219 02:25:29.874300   10286 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1219 02:25:29.874396   10286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-791857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 02:25:29.874470   10286 ssh_runner.go:195] Run: crio config
	I1219 02:25:29.918571   10286 cni.go:84] Creating CNI manager for ""
	I1219 02:25:29.918593   10286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 02:25:29.918611   10286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 02:25:29.918630   10286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-791857 NodeName:addons-791857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 02:25:29.918769   10286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-791857"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 02:25:29.918828   10286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 02:25:29.926910   10286 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 02:25:29.926971   10286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 02:25:29.934414   10286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1219 02:25:29.946643   10286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 02:25:29.962193   10286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1219 02:25:29.974675   10286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1219 02:25:29.978112   10286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 02:25:29.987559   10286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:25:30.065730   10286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 02:25:30.089996   10286 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857 for IP: 192.168.49.2
	I1219 02:25:30.090020   10286 certs.go:195] generating shared ca certs ...
	I1219 02:25:30.090039   10286 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.090167   10286 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 02:25:30.125089   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt ...
	I1219 02:25:30.125122   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt: {Name:mk93220370fd0ee656707aaf7bad7ac75f80cf62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.125297   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key ...
	I1219 02:25:30.125314   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key: {Name:mk6464b375ea664b0b7e6aac31ae3239976bcb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.125419   10286 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 02:25:30.246455   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt ...
	I1219 02:25:30.246486   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt: {Name:mk640a70a316662d907929b9a6ee35a513d55016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.246673   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key ...
	I1219 02:25:30.246690   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key: {Name:mk70fcf1f094cda035aaf61abcc62f5350f14d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.246815   10286 certs.go:257] generating profile certs ...
	I1219 02:25:30.246889   10286 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.key
	I1219 02:25:30.246909   10286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt with IP's: []
	I1219 02:25:30.333708   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt ...
	I1219 02:25:30.333743   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: {Name:mk47664c75fc7928eb0378a2045a0e3158f05ea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.333940   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.key ...
	I1219 02:25:30.333965   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.key: {Name:mk78f96ac0759c1b26f6587875ae07d3e99d23a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.334075   10286 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key.0bee3924
	I1219 02:25:30.334099   10286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt.0bee3924 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1219 02:25:30.388337   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt.0bee3924 ...
	I1219 02:25:30.388369   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt.0bee3924: {Name:mkaf9f8498bba7027ed427dbd927c08f82436f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.388563   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key.0bee3924 ...
	I1219 02:25:30.388582   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key.0bee3924: {Name:mkdf31ea46a1019e3fe6ae1a8ee9803300003eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.388697   10286 certs.go:382] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt.0bee3924 -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt
	I1219 02:25:30.388829   10286 certs.go:386] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key.0bee3924 -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key
	I1219 02:25:30.388920   10286 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.key
	I1219 02:25:30.388959   10286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.crt with IP's: []
	I1219 02:25:30.479358   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.crt ...
	I1219 02:25:30.479392   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.crt: {Name:mka5b06da2b5b4397dd3d6cfa800284c5f8ab7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.479583   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.key ...
	I1219 02:25:30.479608   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.key: {Name:mk043929d09112de1348210222f596debf0d0a3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.479825   10286 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 02:25:30.479885   10286 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 02:25:30.479925   10286 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 02:25:30.479970   10286 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 02:25:30.480560   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 02:25:30.498120   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 02:25:30.514695   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 02:25:30.531830   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 02:25:30.548964   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1219 02:25:30.565382   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 02:25:30.581754   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 02:25:30.597568   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 02:25:30.613572   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 02:25:30.631783   10286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 02:25:30.643313   10286 ssh_runner.go:195] Run: openssl version
	I1219 02:25:30.649137   10286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:25:30.655820   10286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 02:25:30.665419   10286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:25:30.668902   10286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:25:30.668959   10286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:25:30.701980   10286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 02:25:30.709501   10286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
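The openssl/ln sequence above wires the minikube CA into the node's system trust store: OpenSSL locates CAs through hash-named symlinks under /etc/ssl/certs, which is why the b5213941.0 link is created. A minimal sketch of the same idea, using the paths from this log:

	# subject-hash OpenSSL uses for CA lookup (b5213941 in this run)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# point the hash-named link at the CA so TLS clients on the node trust it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"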
	I1219 02:25:30.716611   10286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 02:25:30.719944   10286 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 02:25:30.719987   10286 kubeadm.go:401] StartCluster: {Name:addons-791857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:30.720048   10286 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:25:30.720084   10286 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:25:30.746307   10286 cri.go:92] found id: ""
	I1219 02:25:30.746383   10286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 02:25:30.754276   10286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 02:25:30.762093   10286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1219 02:25:30.762158   10286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 02:25:30.769634   10286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 02:25:30.769657   10286 kubeadm.go:158] found existing configuration files:
	
	I1219 02:25:30.769712   10286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 02:25:30.776776   10286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 02:25:30.776834   10286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 02:25:30.783694   10286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 02:25:30.790973   10286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 02:25:30.791033   10286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 02:25:30.798795   10286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 02:25:30.805922   10286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 02:25:30.805979   10286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 02:25:30.812654   10286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 02:25:30.819775   10286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 02:25:30.819831   10286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 02:25:30.826696   10286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
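For readability, the bootstrap command started above, unwrapped into plain shell (same config file and preflight ignore list as logged; shown only as a transcript aid, and it runs inside the node, not on the host):

	export PATH="/var/lib/minikube/binaries/v1.34.3:$PATH"
	kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables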
	I1219 02:25:30.892858   10286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1219 02:25:30.947250   10286 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1219 02:25:40.372444   10286 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1219 02:25:40.372541   10286 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 02:25:40.372651   10286 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 02:25:40.372747   10286 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 02:25:40.372793   10286 kubeadm.go:319] OS: Linux
	I1219 02:25:40.372857   10286 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 02:25:40.372945   10286 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 02:25:40.373022   10286 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 02:25:40.373096   10286 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 02:25:40.373182   10286 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 02:25:40.373231   10286 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 02:25:40.373278   10286 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 02:25:40.373328   10286 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 02:25:40.373432   10286 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 02:25:40.373578   10286 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 02:25:40.373764   10286 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 02:25:40.373830   10286 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 02:25:40.375764   10286 out.go:252]   - Generating certificates and keys ...
	I1219 02:25:40.375852   10286 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 02:25:40.375939   10286 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 02:25:40.376045   10286 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 02:25:40.376117   10286 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 02:25:40.376195   10286 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 02:25:40.376265   10286 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 02:25:40.376347   10286 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 02:25:40.376479   10286 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-791857 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1219 02:25:40.376570   10286 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 02:25:40.376741   10286 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-791857 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1219 02:25:40.376852   10286 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 02:25:40.376915   10286 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 02:25:40.376957   10286 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 02:25:40.377007   10286 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 02:25:40.377053   10286 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 02:25:40.377117   10286 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 02:25:40.377190   10286 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 02:25:40.377285   10286 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 02:25:40.377365   10286 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 02:25:40.377481   10286 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 02:25:40.377571   10286 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 02:25:40.378737   10286 out.go:252]   - Booting up control plane ...
	I1219 02:25:40.378826   10286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 02:25:40.378928   10286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 02:25:40.379018   10286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 02:25:40.379169   10286 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 02:25:40.379263   10286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 02:25:40.379386   10286 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 02:25:40.379493   10286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 02:25:40.379552   10286 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 02:25:40.379723   10286 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 02:25:40.379842   10286 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 02:25:40.379919   10286 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001369512s
	I1219 02:25:40.380038   10286 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 02:25:40.380149   10286 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1219 02:25:40.380273   10286 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 02:25:40.380384   10286 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 02:25:40.380495   10286 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.360265797s
	I1219 02:25:40.380594   10286 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.975499535s
	I1219 02:25:40.380692   10286 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501087246s
	I1219 02:25:40.380831   10286 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 02:25:40.380964   10286 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 02:25:40.381014   10286 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 02:25:40.381175   10286 kubeadm.go:319] [mark-control-plane] Marking the node addons-791857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 02:25:40.381233   10286 kubeadm.go:319] [bootstrap-token] Using token: fc8dpx.s77uezw1ei6hvydq
	I1219 02:25:40.382476   10286 out.go:252]   - Configuring RBAC rules ...
	I1219 02:25:40.382576   10286 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 02:25:40.382648   10286 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 02:25:40.382796   10286 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 02:25:40.382912   10286 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 02:25:40.383021   10286 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 02:25:40.383100   10286 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 02:25:40.383202   10286 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 02:25:40.383246   10286 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 02:25:40.383286   10286 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 02:25:40.383291   10286 kubeadm.go:319] 
	I1219 02:25:40.383352   10286 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 02:25:40.383358   10286 kubeadm.go:319] 
	I1219 02:25:40.383421   10286 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 02:25:40.383427   10286 kubeadm.go:319] 
	I1219 02:25:40.383448   10286 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 02:25:40.383499   10286 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 02:25:40.383543   10286 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 02:25:40.383549   10286 kubeadm.go:319] 
	I1219 02:25:40.383601   10286 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 02:25:40.383612   10286 kubeadm.go:319] 
	I1219 02:25:40.383649   10286 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 02:25:40.383654   10286 kubeadm.go:319] 
	I1219 02:25:40.383707   10286 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 02:25:40.383771   10286 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 02:25:40.383829   10286 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 02:25:40.383838   10286 kubeadm.go:319] 
	I1219 02:25:40.383906   10286 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 02:25:40.384018   10286 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 02:25:40.384030   10286 kubeadm.go:319] 
	I1219 02:25:40.384150   10286 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fc8dpx.s77uezw1ei6hvydq \
	I1219 02:25:40.384253   10286 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 02:25:40.384272   10286 kubeadm.go:319] 	--control-plane 
	I1219 02:25:40.384277   10286 kubeadm.go:319] 
	I1219 02:25:40.384357   10286 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 02:25:40.384365   10286 kubeadm.go:319] 
	I1219 02:25:40.384443   10286 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fc8dpx.s77uezw1ei6hvydq \
	I1219 02:25:40.384544   10286 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
	I1219 02:25:40.384555   10286 cni.go:84] Creating CNI manager for ""
	I1219 02:25:40.384562   10286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 02:25:40.386493   10286 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 02:25:40.387478   10286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 02:25:40.391679   10286 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1219 02:25:40.391714   10286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 02:25:40.404510   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
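With the kindnet manifest applied above, the CNI pods can be checked once the node settles; a sketch (the app=kindnet label is assumed from the kindnet DaemonSet manifest):

	kubectl --context addons-791857 -n kube-system get pods -l app=kindnet -o wide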
	I1219 02:25:40.609593   10286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 02:25:40.609685   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:40.609745   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-791857 minikube.k8s.io/updated_at=2025_12_19T02_25_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=addons-791857 minikube.k8s.io/primary=true
	I1219 02:25:40.619284   10286 ops.go:34] apiserver oom_adj: -16
	I1219 02:25:40.681946   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:41.182990   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:41.682637   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:42.182360   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:42.682989   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:43.182968   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:43.682066   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:44.182537   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:44.682185   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:45.182024   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:45.242045   10286 kubeadm.go:1114] duration metric: took 4.632411731s to wait for elevateKubeSystemPrivileges
	I1219 02:25:45.242084   10286 kubeadm.go:403] duration metric: took 14.522098487s to StartCluster
	I1219 02:25:45.242107   10286 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:45.242231   10286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:25:45.242572   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:45.242801   10286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 02:25:45.242825   10286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 02:25:45.242885   10286 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
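The toEnable map above is the set the test asked for; on an existing profile the same addons can be toggled from the CLI, e.g. (addon name picked for illustration):

	out/minikube-linux-amd64 -p addons-791857 addons enable metrics-server --alsologtostderr -v=1
	out/minikube-linux-amd64 -p addons-791857 addons list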
	I1219 02:25:45.243008   10286 addons.go:70] Setting yakd=true in profile "addons-791857"
	I1219 02:25:45.243020   10286 addons.go:70] Setting ingress-dns=true in profile "addons-791857"
	I1219 02:25:45.243038   10286 addons.go:70] Setting storage-provisioner=true in profile "addons-791857"
	I1219 02:25:45.243049   10286 addons.go:239] Setting addon storage-provisioner=true in "addons-791857"
	I1219 02:25:45.243054   10286 addons.go:239] Setting addon ingress-dns=true in "addons-791857"
	I1219 02:25:45.243049   10286 addons.go:70] Setting registry-creds=true in profile "addons-791857"
	I1219 02:25:45.243052   10286 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-791857"
	I1219 02:25:45.243073   10286 addons.go:239] Setting addon registry-creds=true in "addons-791857"
	I1219 02:25:45.243082   10286 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:25:45.243092   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243099   10286 addons.go:70] Setting volcano=true in profile "addons-791857"
	I1219 02:25:45.243103   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243111   10286 addons.go:239] Setting addon volcano=true in "addons-791857"
	I1219 02:25:45.243091   10286 addons.go:70] Setting gcp-auth=true in profile "addons-791857"
	I1219 02:25:45.243131   10286 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-791857"
	I1219 02:25:45.243140   10286 addons.go:70] Setting volumesnapshots=true in profile "addons-791857"
	I1219 02:25:45.243151   10286 mustload.go:66] Loading cluster: addons-791857
	I1219 02:25:45.243154   10286 addons.go:239] Setting addon volumesnapshots=true in "addons-791857"
	I1219 02:25:45.243161   10286 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-791857"
	I1219 02:25:45.243169   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243179   10286 addons.go:70] Setting cloud-spanner=true in profile "addons-791857"
	I1219 02:25:45.243209   10286 addons.go:239] Setting addon cloud-spanner=true in "addons-791857"
	I1219 02:25:45.243226   10286 addons.go:70] Setting metrics-server=true in profile "addons-791857"
	I1219 02:25:45.243228   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243238   10286 addons.go:239] Setting addon metrics-server=true in "addons-791857"
	I1219 02:25:45.243263   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243379   10286 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:25:45.243632   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243634   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243637   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243643   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243660   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243713   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243824   10286 addons.go:70] Setting ingress=true in profile "addons-791857"
	I1219 02:25:45.243886   10286 addons.go:239] Setting addon ingress=true in "addons-791857"
	I1219 02:25:45.244038   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243030   10286 addons.go:239] Setting addon yakd=true in "addons-791857"
	I1219 02:25:45.244337   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.244404   10286 addons.go:70] Setting inspektor-gadget=true in profile "addons-791857"
	I1219 02:25:45.244473   10286 addons.go:239] Setting addon inspektor-gadget=true in "addons-791857"
	I1219 02:25:45.243152   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243083   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243133   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.244672   10286 addons.go:70] Setting default-storageclass=true in profile "addons-791857"
	I1219 02:25:45.244751   10286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-791857"
	I1219 02:25:45.244825   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.245079   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.244683   10286 addons.go:70] Setting registry=true in profile "addons-791857"
	I1219 02:25:45.245231   10286 addons.go:239] Setting addon registry=true in "addons-791857"
	I1219 02:25:45.245256   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243091   10286 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-791857"
	I1219 02:25:45.245271   10286 out.go:179] * Verifying Kubernetes components...
	I1219 02:25:45.244142   10286 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-791857"
	I1219 02:25:45.245398   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.245530   10286 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-791857"
	I1219 02:25:45.245614   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.245838   10286 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-791857"
	I1219 02:25:45.243171   10286 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-791857"
	I1219 02:25:45.246140   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.247840   10286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:25:45.253094   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.253125   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.253776   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.254394   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.256467   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.256864   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.257688   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.258503   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.270074   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.304554   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.307305   10286 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1219 02:25:45.308510   10286 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1219 02:25:45.308545   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1219 02:25:45.308731   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.316473   10286 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1219 02:25:45.317673   10286 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 02:25:45.317696   10286 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 02:25:45.317761   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.318212   10286 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1219 02:25:45.320722   10286 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1219 02:25:45.320744   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1219 02:25:45.320802   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.324802   10286 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1219 02:25:45.327568   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1219 02:25:45.327634   10286 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1219 02:25:45.327734   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.345126   10286 addons.go:239] Setting addon default-storageclass=true in "addons-791857"
	I1219 02:25:45.345178   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.345640   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.347958   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1219 02:25:45.350383   10286 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1219 02:25:45.351311   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1219 02:25:45.351330   10286 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1219 02:25:45.351395   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.352835   10286 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-791857"
	I1219 02:25:45.352883   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.353342   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.361434   10286 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1219 02:25:45.361765   10286 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1219 02:25:45.364073   10286 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1219 02:25:45.364101   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1219 02:25:45.364183   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.367284   10286 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1219 02:25:45.367303   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1219 02:25:45.367364   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.371299   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1219 02:25:45.372332   10286 out.go:179]   - Using image docker.io/registry:3.0.0
	I1219 02:25:45.373565   10286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 02:25:45.373593   10286 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1219 02:25:45.374438   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1219 02:25:45.373827   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1219 02:25:45.374536   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.374934   10286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 02:25:45.374981   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 02:25:45.375061   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.375579   10286 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1219 02:25:45.377620   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1219 02:25:45.377663   10286 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:25:45.379181   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1219 02:25:45.379221   10286 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:25:45.380935   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1219 02:25:45.381346   10286 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1219 02:25:45.381847   10286 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1219 02:25:45.382736   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1219 02:25:45.383042   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.383258   10286 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1219 02:25:45.383273   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1219 02:25:45.383363   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.384237   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1219 02:25:45.385481   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.385972   10286 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1219 02:25:45.387215   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1219 02:25:45.387278   10286 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1219 02:25:45.387303   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1219 02:25:45.387348   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.388819   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1219 02:25:45.390206   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1219 02:25:45.390229   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1219 02:25:45.390320   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.399909   10286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 02:25:45.399934   10286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 02:25:45.400000   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.403770   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	W1219 02:25:45.404349   10286 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1219 02:25:45.419011   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.425427   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.428491   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.429806   10286 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1219 02:25:45.431774   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.433528   10286 out.go:179]   - Using image docker.io/busybox:stable
	I1219 02:25:45.434585   10286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1219 02:25:45.434599   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1219 02:25:45.434767   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.436421   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.436459   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.443340   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.448221   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.449954   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.453728   10286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 02:25:45.457853   10286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 02:25:45.464924   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	W1219 02:25:45.468202   10286 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1219 02:25:45.468235   10286 retry.go:31] will retry after 196.209396ms: ssh: handshake failed: EOF
	I1219 02:25:45.468427   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	W1219 02:25:45.470212   10286 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1219 02:25:45.470234   10286 retry.go:31] will retry after 154.168092ms: ssh: handshake failed: EOF
	I1219 02:25:45.471580   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.480007   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.546873   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1219 02:25:45.570341   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1219 02:25:45.570363   10286 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1219 02:25:45.575466   10286 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 02:25:45.575580   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1219 02:25:45.593336   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1219 02:25:45.593363   10286 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1219 02:25:45.593865   10286 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1219 02:25:45.593881   10286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1219 02:25:45.600610   10286 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 02:25:45.600694   10286 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 02:25:45.605032   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1219 02:25:45.616411   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1219 02:25:45.616439   10286 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1219 02:25:45.619451   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1219 02:25:45.619535   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1219 02:25:45.627819   10286 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1219 02:25:45.627844   10286 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1219 02:25:45.634437   10286 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1219 02:25:45.634463   10286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1219 02:25:45.639343   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1219 02:25:45.643760   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1219 02:25:45.646442   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1219 02:25:45.647353   10286 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 02:25:45.647371   10286 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 02:25:45.649779   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 02:25:45.660085   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1219 02:25:45.660541   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 02:25:45.679821   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1219 02:25:45.679863   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1219 02:25:45.682056   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1219 02:25:45.682080   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1219 02:25:45.698753   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 02:25:45.701074   10286 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1219 02:25:45.701102   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1219 02:25:45.711843   10286 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1219 02:25:45.711891   10286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1219 02:25:45.757357   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1219 02:25:45.757389   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1219 02:25:45.757786   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1219 02:25:45.767381   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1219 02:25:45.769534   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1219 02:25:45.769560   10286 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1219 02:25:45.807847   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1219 02:25:45.807879   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1219 02:25:45.811436   10286 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:25:45.811461   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1219 02:25:45.818685   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1219 02:25:45.888449   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:25:45.901869   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1219 02:25:45.901908   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1219 02:25:45.910025   10286 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
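(Editor's sketch, not part of the log.) The `start.go:977` entry above records minikube injecting the `host.minikube.internal` record into CoreDNS's ConfigMap. As a rough, hedged illustration of that step only — not minikube's actual code — a client-go round trip over the kubeadm-managed `kube-system/coredns` ConfigMap could look like the following; the `Corefile` key name is standard for kubeadm, but the placement of the hosts entry here is an assumption:

```go
// Sketch only: fetch the coredns ConfigMap, add a host.minikube.internal
// record, and write it back. The real rewrite splices a hosts block into the
// existing ".:53" server block rather than appending a comment like this.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func injectHostRecord(ctx context.Context, kubeconfig, hostIP string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	record := fmt.Sprintf("%s host.minikube.internal", hostIP)
	if !strings.Contains(cm.Data["Corefile"], record) {
		// Assumed placement for the sketch; see the comment above.
		cm.Data["Corefile"] += fmt.Sprintf("\n# hosts entry: %s\n", record)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	}
	return err
}
```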
	I1219 02:25:45.912039   10286 node_ready.go:35] waiting up to 6m0s for node "addons-791857" to be "Ready" ...
	I1219 02:25:45.944512   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1219 02:25:45.975635   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1219 02:25:45.975665   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1219 02:25:46.050901   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1219 02:25:46.050942   10286 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1219 02:25:46.119627   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1219 02:25:46.119651   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1219 02:25:46.189653   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1219 02:25:46.189797   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1219 02:25:46.251974   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1219 02:25:46.252003   10286 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1219 02:25:46.317393   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
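(Editor's sketch, not part of the log.) The `ssh_runner.go:195` entries above batch many addon manifests into a single `kubectl apply -f a -f b ...` invocation run inside the node. Minikube's own runner goes over SSH; as a minimal local illustration under that assumption, the same batching can be expressed with `os/exec`:

```go
// Minimal sketch (not minikube's ssh_runner): run one `kubectl apply` with a
// "-f" flag per manifest, mirroring the multi-file invocations in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply failed: %v\n%s", err, out)
	}
	return nil
}
```

Batching keeps ordering within one API round of server-side validation, which is also why the CRD race further down can surface when custom resources ship in the same batch as their CRDs.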
	I1219 02:25:46.414450   10286 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-791857" context rescaled to 1 replicas
	I1219 02:25:46.655814   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.016427503s)
	I1219 02:25:46.655949   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.009468892s)
	I1219 02:25:46.656201   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.006364309s)
	W1219 02:25:46.684811   10286 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1219 02:25:46.713875   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.015073999s)
	I1219 02:25:46.713919   10286 addons.go:500] Verifying addon metrics-server=true in "addons-791857"
	I1219 02:25:46.714265   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:46.714559   10286 addons.go:500] Verifying addon registry=true in "addons-791857"
	I1219 02:25:46.714864   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:46.718470   10286 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-791857 service yakd-dashboard -n yakd-dashboard
	
	I1219 02:25:46.745682   10286 out.go:179] * Verifying registry addon...
	I1219 02:25:46.747771   10286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1219 02:25:46.752498   10286 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1219 02:25:46.752667   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:47.251217   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:47.465988   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.577480598s)
	W1219 02:25:47.466026   10286 addons.go:479] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1219 02:25:47.466047   10286 retry.go:31] will retry after 257.944725ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
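(Editor's sketch, not part of the log.) The failure above is the usual CRD race: `csi-hostpath-snapshotclass.yaml` defines a `VolumeSnapshotClass`, but the `volumesnapshotclasses` CRD applied in the same batch is not yet registered, so the mapping lookup fails with "ensure CRDs are installed first". The `retry.go:31` line shows minikube simply waiting and re-applying. A stdlib-only sketch of that pattern, with arbitrary backoff values and a hypothetical `apply` callback, might look like:

```go
// Illustration of the logged retry: if the apply output indicates the CRD is
// not registered yet, sleep and try again with exponential backoff; any other
// error is returned immediately.
package main

import (
	"fmt"
	"strings"
	"time"
)

func applyWithRetry(apply func() (string, error), attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := apply()
		if err == nil {
			return nil
		}
		lastErr = err
		if !strings.Contains(out, "ensure CRDs are installed first") {
			return err // not the CRD race; give up immediately
		}
		time.Sleep(delay)
		delay *= 2 // back off between retries
	}
	return fmt.Errorf("apply did not succeed after %d attempts: %w", attempts, lastErr)
}
```

In the log the retry (visible a few lines below) succeeds once the snapshot CRDs have been established.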
	I1219 02:25:47.466146   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.521593847s)
	I1219 02:25:47.466184   10286 addons.go:500] Verifying addon ingress=true in "addons-791857"
	I1219 02:25:47.466470   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.149012992s)
	I1219 02:25:47.466502   10286 addons.go:500] Verifying addon csi-hostpath-driver=true in "addons-791857"
	I1219 02:25:47.466535   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:47.466808   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:47.494041   10286 out.go:179] * Verifying csi-hostpath-driver addon...
	I1219 02:25:47.494046   10286 out.go:179] * Verifying ingress addon...
	I1219 02:25:47.495678   10286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1219 02:25:47.495869   10286 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1219 02:25:47.498787   10286 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1219 02:25:47.498806   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:47.498960   10286 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1219 02:25:47.498971   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
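(Editor's sketch, not part of the log.) The `kapi.go:75`/`kapi.go:96` lines poll pods that match a label selector (for example `kubernetes.io/minikube-addons=registry` or `app.kubernetes.io/name=ingress-nginx`) until they leave `Pending`. A hedged client-go sketch of that wait loop — not minikube's kapi package — is shown below; namespace, selector, and interval are parameters:

```go
// Sketch of a label-selector wait: list pods matching the selector and poll
// until every one reports phase Running, or the context is cancelled.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval time.Duration) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false // still Pending (as in the log) or otherwise not running
				break
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}
```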
	I1219 02:25:47.724772   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:25:47.751362   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:47.915016   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:47.998813   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:47.998963   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:48.250864   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:48.498720   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:48.498839   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:48.751938   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:48.998754   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:48.998832   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:49.251391   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:49.499424   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:49.499585   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:49.751612   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:49.915196   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:49.999428   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:49.999572   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:50.194823   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.470005656s)
	I1219 02:25:50.250675   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:50.500213   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:50.500333   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:50.751277   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:50.999691   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:50.999912   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:51.251113   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:51.499274   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:51.499315   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:51.751508   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:51.999096   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:51.999125   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:52.251697   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:52.414573   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:52.499550   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:52.499601   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:52.750654   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:52.920812   10286 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1219 02:25:52.920873   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:52.939133   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:52.999204   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:52.999244   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:53.053895   10286 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1219 02:25:53.066973   10286 addons.go:239] Setting addon gcp-auth=true in "addons-791857"
	I1219 02:25:53.067026   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:53.067431   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:53.085183   10286 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1219 02:25:53.085280   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:53.103216   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:53.202906   10286 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1219 02:25:53.204436   10286 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:25:53.205857   10286 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1219 02:25:53.205878   10286 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1219 02:25:53.219607   10286 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1219 02:25:53.219632   10286 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1219 02:25:53.232531   10286 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1219 02:25:53.232553   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1219 02:25:53.245799   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1219 02:25:53.250297   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:53.499657   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:53.499686   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:53.549051   10286 addons.go:500] Verifying addon gcp-auth=true in "addons-791857"
	I1219 02:25:53.549410   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:53.570687   10286 out.go:179] * Verifying gcp-auth addon...
	I1219 02:25:53.572917   10286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1219 02:25:53.600239   10286 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1219 02:25:53.600264   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:53.750696   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:53.999215   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:53.999291   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:54.075721   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:54.250408   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:54.415128   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:54.498741   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:54.498737   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:54.576844   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:54.751561   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:54.999117   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:54.999267   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:55.075803   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:55.250351   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:55.498916   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:55.499156   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:55.576603   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:55.751367   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:55.998393   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:55.998539   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:56.075845   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:56.250637   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:56.415289   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:56.498858   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:56.499014   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:56.576277   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:56.751430   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:56.998722   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:56.999005   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:57.076318   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:57.250994   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:57.499466   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:57.499479   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:57.575884   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:57.750821   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:57.999311   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:57.999407   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:58.076062   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:58.251757   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:58.414739   10286 node_ready.go:49] node "addons-791857" is "Ready"
	I1219 02:25:58.414769   10286 node_ready.go:38] duration metric: took 12.502696641s for node "addons-791857" to be "Ready" ...
	I1219 02:25:58.414782   10286 api_server.go:52] waiting for apiserver process to appear ...
	I1219 02:25:58.414830   10286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:25:58.430392   10286 api_server.go:72] duration metric: took 13.187531738s to wait for apiserver process to appear ...
	I1219 02:25:58.430444   10286 api_server.go:88] waiting for apiserver healthz status ...
	I1219 02:25:58.430470   10286 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1219 02:25:58.434504   10286 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1219 02:25:58.435458   10286 api_server.go:141] control plane version: v1.34.3
	I1219 02:25:58.435481   10286 api_server.go:131] duration metric: took 5.028863ms to wait for apiserver health ...
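(Editor's sketch, not part of the log.) The `api_server.go` entries above probe `https://192.168.49.2:8443/healthz` and accept a 200 response with body "ok". As a rough stdlib illustration only — the real check authenticates with the cluster CA and client certificates rather than skipping TLS verification — the probe amounts to:

```go
// Rough sketch of the healthz probe: GET /healthz on the apiserver and treat
// a 200 with body "ok" as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.49.2:8443")
	fmt.Println(ok, err)
}
```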
	I1219 02:25:58.435489   10286 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 02:25:58.439298   10286 system_pods.go:59] 20 kube-system pods found
	I1219 02:25:58.439325   10286 system_pods.go:61] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending
	I1219 02:25:58.439334   10286 system_pods.go:61] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:58.439340   10286 system_pods.go:61] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:58.439347   10286 system_pods.go:61] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:58.439353   10286 system_pods.go:61] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:58.439357   10286 system_pods.go:61] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:58.439361   10286 system_pods.go:61] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:58.439366   10286 system_pods.go:61] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:58.439372   10286 system_pods.go:61] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:58.439384   10286 system_pods.go:61] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:58.439391   10286 system_pods.go:61] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:58.439395   10286 system_pods.go:61] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:58.439399   10286 system_pods.go:61] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:58.439404   10286 system_pods.go:61] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:58.439411   10286 system_pods.go:61] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:58.439415   10286 system_pods.go:61] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:58.439419   10286 system_pods.go:61] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending
	I1219 02:25:58.439423   10286 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.439432   10286 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.439442   10286 system_pods.go:61] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:58.439449   10286 system_pods.go:74] duration metric: took 3.954747ms to wait for pod list to return data ...
	I1219 02:25:58.439459   10286 default_sa.go:34] waiting for default service account to be created ...
	I1219 02:25:58.441294   10286 default_sa.go:45] found service account: "default"
	I1219 02:25:58.441314   10286 default_sa.go:55] duration metric: took 1.84934ms for default service account to be created ...
	I1219 02:25:58.441324   10286 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 02:25:58.448000   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:58.448030   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending
	I1219 02:25:58.448040   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:58.448049   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:58.448059   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:58.448067   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:58.448075   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:58.448081   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:58.448086   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:58.448091   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:58.448099   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:58.448116   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:58.448122   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:58.448129   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:58.448137   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:58.448144   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:58.448152   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:58.448157   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending
	I1219 02:25:58.448168   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.448177   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.448184   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:58.448201   10286 retry.go:31] will retry after 247.149321ms: missing components: kube-dns
	I1219 02:25:58.499402   10286 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1219 02:25:58.499416   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:58.499429   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:58.600338   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:58.703364   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:58.703406   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:25:58.703429   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:58.703441   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:58.703450   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:58.703465   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:58.703471   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:58.703483   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:58.703489   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:58.703509   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:58.703523   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:58.703528   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:58.703656   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:58.703672   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:58.703680   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:58.703693   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:58.703722   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:58.703731   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:25:58.703747   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.703761   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.703773   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:58.703790   10286 retry.go:31] will retry after 372.451905ms: missing components: kube-dns
	I1219 02:25:58.800640   10286 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1219 02:25:58.800665   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:59.006163   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:59.006388   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:59.076914   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:59.081067   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:59.081104   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:25:59.081114   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:59.081122   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:59.081131   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:59.081149   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:59.081156   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:59.081163   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:59.081168   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:59.081173   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:59.081183   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:59.081188   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:59.081194   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:59.081204   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:59.081212   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:59.081220   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:59.081227   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:59.081234   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:25:59.081243   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.081252   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.081260   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:59.081276   10286 retry.go:31] will retry after 472.328916ms: missing components: kube-dns
	I1219 02:25:59.252576   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:59.499484   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:59.499804   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:59.558399   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:59.558440   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:25:59.558452   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:59.558463   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:59.558471   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:59.558481   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:59.558487   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:59.558493   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:59.558499   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:59.558504   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:59.558513   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:59.558524   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:59.558530   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:59.558542   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:59.558552   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:59.558564   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:59.558572   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:59.558579   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:25:59.558587   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.558603   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.558616   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:59.558633   10286 retry.go:31] will retry after 389.981082ms: missing components: kube-dns
	I1219 02:25:59.577129   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:59.751868   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:59.953387   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:59.953428   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:25:59.953436   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Running
	I1219 02:25:59.953448   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:59.953471   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:59.953484   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:59.953491   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:59.953502   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:59.953508   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:59.953514   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:59.953526   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:59.953531   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:59.953537   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:59.953545   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:59.953552   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:59.953558   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:59.953563   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:59.953568   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:25:59.953581   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.953592   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.953598   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Running
	I1219 02:25:59.953611   10286 system_pods.go:126] duration metric: took 1.51227941s to wait for k8s-apps to be running ...
	I1219 02:25:59.953625   10286 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 02:25:59.953678   10286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:25:59.970998   10286 system_svc.go:56] duration metric: took 17.366101ms WaitForService to wait for kubelet
	I1219 02:25:59.971032   10286 kubeadm.go:587] duration metric: took 14.728176114s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 02:25:59.971055   10286 node_conditions.go:102] verifying NodePressure condition ...
	I1219 02:25:59.974526   10286 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 02:25:59.974559   10286 node_conditions.go:123] node cpu capacity is 8
	I1219 02:25:59.974579   10286 node_conditions.go:105] duration metric: took 3.517868ms to run NodePressure ...
	I1219 02:25:59.974593   10286 start.go:242] waiting for startup goroutines ...
	I1219 02:26:00.000141   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:00.000141   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:00.076185   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:00.251781   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:00.499091   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:00.499101   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:00.576464   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:00.752043   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:00.999068   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:00.999102   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:01.076649   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:01.251904   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:01.498930   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:01.498949   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:01.577020   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:01.750800   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:02.015127   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:02.015223   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:02.115688   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:02.251760   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:02.499932   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:02.500174   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:02.576821   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:02.751117   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:02.999252   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:02.999390   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:03.076099   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:03.251663   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:03.499990   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:03.500097   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:03.600751   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:03.751459   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:03.999869   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:04.000010   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:04.076471   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:04.251517   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:04.498889   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:04.499094   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:04.575279   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:04.800060   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:04.999128   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:04.999244   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:05.099907   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:05.250486   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:05.500380   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:05.500409   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:05.576132   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:05.750955   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:05.999209   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:05.999307   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:06.076312   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:06.251400   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:06.499342   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:06.499445   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:06.575947   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:06.750393   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:06.999769   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:06.999772   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:07.076167   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:07.251443   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:07.501760   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:07.501858   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:07.577008   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:07.750895   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:08.000789   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:08.000815   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:08.076785   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:08.252076   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:08.499352   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:08.499461   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:08.575998   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:08.843625   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:09.018275   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:09.018593   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:09.118046   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:09.251440   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:09.499906   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:09.500076   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:09.577176   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:09.751975   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:10.000415   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:10.000537   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:10.076345   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:10.251354   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:10.499557   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:10.499767   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:10.576312   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:10.751271   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:10.999593   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:10.999592   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:11.076415   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:11.251530   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:11.500211   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:11.500390   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:11.576753   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:11.751254   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:11.999384   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:11.999547   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:12.100011   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:12.251181   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:12.499955   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:12.499983   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:12.576404   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:12.751172   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:12.999587   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:12.999693   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:13.076596   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:13.251656   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:13.500593   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:13.500738   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:13.576580   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:13.751655   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:14.000588   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:14.000761   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:14.102605   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:14.250637   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:14.498515   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:14.498523   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:14.575987   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:14.751065   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:14.999392   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:14.999420   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:15.099693   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:15.251410   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:15.499918   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:15.500056   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:15.576403   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:15.751550   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:16.000138   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:16.000148   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:16.076345   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:16.252254   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:16.499108   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:16.499143   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:16.576569   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:16.751848   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:17.001255   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:17.001316   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:17.076287   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:17.250960   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:17.499151   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:17.499222   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:17.575568   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:17.751377   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:17.999645   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:17.999683   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:18.075824   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:18.250382   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:18.499954   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:18.499976   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:18.576797   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:18.750490   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:19.004611   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:19.004952   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:19.105320   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:19.251387   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:19.499807   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:19.500045   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:19.576401   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:19.751486   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:20.000100   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:20.000108   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:20.076804   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:20.251265   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:20.499372   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:20.499406   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:20.576033   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:20.750808   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:20.998952   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:20.998996   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:21.100110   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:21.250871   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:21.498863   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:21.498898   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:21.576330   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:21.751376   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:21.998827   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:21.998828   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:22.076339   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:22.251230   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:22.500084   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:22.500102   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:22.576624   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:22.751636   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:23.000488   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:23.000825   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:23.076792   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:23.252250   10286 kapi.go:107] duration metric: took 36.504476539s to wait for kubernetes.io/minikube-addons=registry ...
	I1219 02:26:23.499386   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:23.500084   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:23.576666   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:23.999821   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:23.999952   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:24.076286   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:24.500286   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:24.500422   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:24.576006   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:24.999294   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:24.999530   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:25.098936   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:25.499176   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:25.499280   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:25.575888   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:25.999461   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:25.999461   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:26.075503   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:26.499267   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:26.499404   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:26.575581   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:27.000240   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:27.000256   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:27.076484   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:27.498943   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:27.498982   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:27.576647   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:27.999889   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:27.999892   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:28.076936   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:28.499298   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:28.499334   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:28.575654   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:28.999274   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:28.999276   10286 kapi.go:107] duration metric: took 41.503606159s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1219 02:26:29.075441   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:29.500004   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:29.576763   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:29.999898   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:30.100968   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:30.499400   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:30.576234   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:31.000185   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:31.079107   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:31.499680   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:31.576400   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:32.000112   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:32.076613   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:32.499949   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:32.576489   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:33.000224   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:33.075924   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:33.500781   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:33.576115   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:33.999454   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:34.099985   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:34.499278   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:34.575750   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:34.999884   10286 kapi.go:107] duration metric: took 47.50401219s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1219 02:26:35.076308   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:35.576577   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:36.075466   10286 kapi.go:107] duration metric: took 42.502549881s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1219 02:26:36.076920   10286 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-791857 cluster.
	I1219 02:26:36.078108   10286 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1219 02:26:36.079217   10286 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1219 02:26:36.080346   10286 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, registry-creds, ingress-dns, nvidia-device-plugin, storage-provisioner, default-storageclass, yakd, metrics-server, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1219 02:26:36.082203   10286 addons.go:546] duration metric: took 50.839312711s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner registry-creds ingress-dns nvidia-device-plugin storage-provisioner default-storageclass yakd metrics-server inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1219 02:26:36.082247   10286 start.go:247] waiting for cluster config update ...
	I1219 02:26:36.082272   10286 start.go:256] writing updated cluster config ...
	I1219 02:26:36.082506   10286 ssh_runner.go:195] Run: rm -f paused
	I1219 02:26:36.086366   10286 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 02:26:36.089074   10286 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w88lw" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.092645   10286 pod_ready.go:94] pod "coredns-66bc5c9577-w88lw" is "Ready"
	I1219 02:26:36.092662   10286 pod_ready.go:86] duration metric: took 3.569548ms for pod "coredns-66bc5c9577-w88lw" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.094198   10286 pod_ready.go:83] waiting for pod "etcd-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.097235   10286 pod_ready.go:94] pod "etcd-addons-791857" is "Ready"
	I1219 02:26:36.097253   10286 pod_ready.go:86] duration metric: took 3.03834ms for pod "etcd-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.098892   10286 pod_ready.go:83] waiting for pod "kube-apiserver-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.101997   10286 pod_ready.go:94] pod "kube-apiserver-addons-791857" is "Ready"
	I1219 02:26:36.102017   10286 pod_ready.go:86] duration metric: took 3.103653ms for pod "kube-apiserver-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.103542   10286 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.490223   10286 pod_ready.go:94] pod "kube-controller-manager-addons-791857" is "Ready"
	I1219 02:26:36.490251   10286 pod_ready.go:86] duration metric: took 386.690084ms for pod "kube-controller-manager-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.690225   10286 pod_ready.go:83] waiting for pod "kube-proxy-7g9j9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:37.089844   10286 pod_ready.go:94] pod "kube-proxy-7g9j9" is "Ready"
	I1219 02:26:37.089868   10286 pod_ready.go:86] duration metric: took 399.618352ms for pod "kube-proxy-7g9j9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:37.289909   10286 pod_ready.go:83] waiting for pod "kube-scheduler-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:37.690288   10286 pod_ready.go:94] pod "kube-scheduler-addons-791857" is "Ready"
	I1219 02:26:37.690314   10286 pod_ready.go:86] duration metric: took 400.378337ms for pod "kube-scheduler-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:37.690326   10286 pod_ready.go:40] duration metric: took 1.603940629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 02:26:37.732861   10286 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 02:26:37.734671   10286 out.go:179] * Done! kubectl is now configured to use "addons-791857" cluster and "default" namespace by default
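The kapi.go and pod_ready.go entries above repeat the same pattern for each addon: poll the pods matched by a label selector until every one of them reports the Ready condition. The following is a minimal client-go sketch of that polling loop, not minikube's actual implementation; the helper names (waitForLabel, podReady), the 500ms/6m interval and timeout, and the default kubeconfig path are illustrative assumptions only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// waitForLabel polls until every pod matching the selector in the namespace is Ready.
// Interval and timeout here are illustrative, not the values minikube uses.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// No pods yet (or a transient API error): keep polling, like the
				// repeated "current state: Pending" lines in the log above.
				return false, nil
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		panic(err)
	}
	fmt.Println("all pods matching the selector are Ready")
}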
	
	
	==> CRI-O <==
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.860847638Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-pjcjr/POD" id=1c70ab1d-6ff3-43b9-bcdb-50eadeaacd85 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.860927224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.868188157Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pjcjr Namespace:default ID:f80ed5a0af396238de7d12ed279d099ab6c14f0d6043c12f4fbdc6b4ef7ff15e UID:df53b82f-8343-4d95-96b2-bba6866822cb NetNS:/var/run/netns/bc8df1da-82c0-452e-8d2a-174b298200f3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000788600}] Aliases:map[]}"
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.86823005Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-pjcjr to CNI network \"kindnet\" (type=ptp)"
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.880151972Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pjcjr Namespace:default ID:f80ed5a0af396238de7d12ed279d099ab6c14f0d6043c12f4fbdc6b4ef7ff15e UID:df53b82f-8343-4d95-96b2-bba6866822cb NetNS:/var/run/netns/bc8df1da-82c0-452e-8d2a-174b298200f3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000788600}] Aliases:map[]}"
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.8803522Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-pjcjr for CNI network kindnet (type=ptp)"
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.882000078Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.883593713Z" level=info msg="Ran pod sandbox f80ed5a0af396238de7d12ed279d099ab6c14f0d6043c12f4fbdc6b4ef7ff15e with infra container: default/hello-world-app-5d498dc89-pjcjr/POD" id=1c70ab1d-6ff3-43b9-bcdb-50eadeaacd85 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.886043695Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=50fa142d-9b08-4294-b8dd-24b7b5100e5c name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.886197882Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=50fa142d-9b08-4294-b8dd-24b7b5100e5c name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.886242913Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=50fa142d-9b08-4294-b8dd-24b7b5100e5c name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.886925122Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=7556f9a4-4653-4d7f-8e3a-e27ddb7e31b8 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:29:19 addons-791857 crio[771]: time="2025-12-19T02:29:19.891774173Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.279808761Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=7556f9a4-4653-4d7f-8e3a-e27ddb7e31b8 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.280405925Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bc395819-06fc-42a5-82d0-3238f8a2114b name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.282076386Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=6e4457cd-5a76-4121-a915-5bcd39aadf6b name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.28565163Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-pjcjr/hello-world-app" id=62aca2ba-5501-465a-9acb-fa958c7febc4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.285841542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.292784256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.293008489Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f00e94f86a4e8f96f90240903a4e521b8d68dfe7dcaa6e194c061187c9d07d97/merged/etc/passwd: no such file or directory"
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.293038289Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f00e94f86a4e8f96f90240903a4e521b8d68dfe7dcaa6e194c061187c9d07d97/merged/etc/group: no such file or directory"
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.293342701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.331844398Z" level=info msg="Created container 13589b601ac29ba03b23a56ed9ba6110eff3dfb2047c26f4c0723406764ef3d3: default/hello-world-app-5d498dc89-pjcjr/hello-world-app" id=62aca2ba-5501-465a-9acb-fa958c7febc4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.332562134Z" level=info msg="Starting container: 13589b601ac29ba03b23a56ed9ba6110eff3dfb2047c26f4c0723406764ef3d3" id=b352c068-c23d-4584-b15b-0ba346ea8290 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 02:29:20 addons-791857 crio[771]: time="2025-12-19T02:29:20.334682154Z" level=info msg="Started container" PID=9569 containerID=13589b601ac29ba03b23a56ed9ba6110eff3dfb2047c26f4c0723406764ef3d3 description=default/hello-world-app-5d498dc89-pjcjr/hello-world-app id=b352c068-c23d-4584-b15b-0ba346ea8290 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f80ed5a0af396238de7d12ed279d099ab6c14f0d6043c12f4fbdc6b4ef7ff15e
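The CRI-O entries above trace one container start end to end: RunPodSandbox for hello-world-app-5d498dc89-pjcjr, CNI attachment to the kindnet network, the pull of docker.io/kicbase/echo-server:1.0, then CreateContainer and StartContainer. The same runtime state can be read back over the CRI gRPC API; the sketch below is a hedged illustration only, assuming CRI-O's default socket path (/var/run/crio/crio.sock) and sufficient privileges to open it, and it simply lists containers much like the status table that follows.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI socket; /var/run/crio/crio.sock is CRI-O's default and may differ per host.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// List all containers known to the runtime, similar to the "container status" table below.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-13.13s  %-40s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}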
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	13589b601ac29       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   f80ed5a0af396       hello-world-app-5d498dc89-pjcjr             default
	82083cc2b8ec2       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   e89f1ac2f2ff6       registry-creds-764b6fb674-xdlrg             kube-system
	beb4cb0bfc246       public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c                                           2 minutes ago            Running             nginx                                    0                   8d7abe02345ff       nginx                                       default
	5f9c339e2a099       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   2d94480efc53e       busybox                                     default
	9ddd01031bdbf       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   c2d8b53d07769       gcp-auth-78565c9fb4-6bmz4                   gcp-auth
	18463208a59b6       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   f6b27f69afa7f       ingress-nginx-controller-85d4c799dd-qmd9h   ingress-nginx
	0ab375c325b23       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             2 minutes ago            Exited              patch                                    2                   e9e559cc42e5f       ingress-nginx-admission-patch-kl62v         ingress-nginx
	e7ab741310c71       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   50887a9e8812c       csi-hostpathplugin-stf22                    kube-system
	96a4c77bc9411       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   50887a9e8812c       csi-hostpathplugin-stf22                    kube-system
	3b4d17ba42562       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   50887a9e8812c       csi-hostpathplugin-stf22                    kube-system
	3da71007b4d24       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   50887a9e8812c       csi-hostpathplugin-stf22                    kube-system
	464d91d87ed3b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   50887a9e8812c       csi-hostpathplugin-stf22                    kube-system
	2af2c2fc8740d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   9300ee68707ba       gadget-j5dvh                                gadget
	3952448da55ae       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   6f1c59bad19dd       registry-proxy-wsz68                        kube-system
	1bb6f09e7568e       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   b0e343d8bbba4       nvidia-device-plugin-daemonset-9ngs4        kube-system
	889116c0a9d40       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   57c70d41b9de3       amd-gpu-device-plugin-j2hvw                 kube-system
	258da604e725d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   76a5e0e088a99       snapshot-controller-7d9fbc56b8-v6xz5        kube-system
	0576bfa9d823e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   50887a9e8812c       csi-hostpathplugin-stf22                    kube-system
	97ca5f9b244b7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   821953c6b09fe       snapshot-controller-7d9fbc56b8-d42wc        kube-system
	fc1c8efae8677       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   5ee9003edf82a       ingress-nginx-admission-create-l2d6q        ingress-nginx
	6784d80b9a465       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   6e6e9d2fbbc60       csi-hostpath-attacher-0                     kube-system
	88c063c48a86c       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   db2f968a6a8b0       csi-hostpath-resizer-0                      kube-system
	05ef0f62db0a2       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              3 minutes ago            Running             yakd                                     0                   96fc102eaeef6       yakd-dashboard-6654c87f9b-b29t5             yakd-dashboard
	5b8bfe2727c13       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   ae3b7d3174f99       cloud-spanner-emulator-5bdddb765-jb86j      default
	5a1a1413ec4ac       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   0b444eb95c520       metrics-server-85b7d694d7-dnphb             kube-system
	8d463ddbcc194       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   114e345f99551       registry-6b586f9694-j2n8x                   kube-system
	bebca1e94f189       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   9ab70674944c8       local-path-provisioner-648f6765c9-ld25w     local-path-storage
	0775e7ddec4bd       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   deb2352024842       kube-ingress-dns-minikube                   kube-system
	9de7849d09931       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   46b74af9d5ce9       coredns-66bc5c9577-w88lw                    kube-system
	bbe76e37f22c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   649155a47504b       storage-provisioner                         kube-system
	483e903265e32       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           3 minutes ago            Running             kindnet-cni                              0                   ce82743c073c2       kindnet-hdbwg                               kube-system
	a51f3eaf36dba       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             3 minutes ago            Running             kube-proxy                               0                   416ad8c57c1f1       kube-proxy-7g9j9                            kube-system
	fcf4200f75e68       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             3 minutes ago            Running             kube-apiserver                           0                   7ca4f5eb9a73f       kube-apiserver-addons-791857                kube-system
	73473c7fc9bbe       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago            Running             etcd                                     0                   ec784dde985be       etcd-addons-791857                          kube-system
	e6fd793aa75bd       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             3 minutes ago            Running             kube-scheduler                           0                   043acd3d662a9       kube-scheduler-addons-791857                kube-system
	8cc6b1da7c4c1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             3 minutes ago            Running             kube-controller-manager                  0                   241a3a9bbbcf3       kube-controller-manager-addons-791857       kube-system
	
	
	==> coredns [9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580] <==
	[INFO] 10.244.0.22:39260 - 27058 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00021054s
	[INFO] 10.244.0.22:34545 - 37237 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006180661s
	[INFO] 10.244.0.22:45092 - 51111 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008953081s
	[INFO] 10.244.0.22:38130 - 47986 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006741378s
	[INFO] 10.244.0.22:44659 - 21460 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006859595s
	[INFO] 10.244.0.22:45059 - 49278 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00637689s
	[INFO] 10.244.0.22:36221 - 65074 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006525682s
	[INFO] 10.244.0.22:39091 - 13716 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001227658s
	[INFO] 10.244.0.22:44227 - 25729 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002682026s
	[INFO] 10.244.0.28:41377 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000189403s
	[INFO] 10.244.0.28:50734 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000156367s
	[INFO] 10.244.0.30:53772 - 60180 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000187104s
	[INFO] 10.244.0.30:34750 - 17220 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000233144s
	[INFO] 10.244.0.30:40071 - 51050 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000132832s
	[INFO] 10.244.0.30:48034 - 25176 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000186009s
	[INFO] 10.244.0.30:42434 - 8216 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000111613s
	[INFO] 10.244.0.30:60821 - 28485 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000169635s
	[INFO] 10.244.0.30:56563 - 30755 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.007238701s
	[INFO] 10.244.0.30:50555 - 61415 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.00818203s
	[INFO] 10.244.0.30:57610 - 64463 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005227876s
	[INFO] 10.244.0.30:48635 - 27953 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.008832763s
	[INFO] 10.244.0.30:57986 - 1737 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00410575s
	[INFO] 10.244.0.30:40726 - 27935 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005452751s
	[INFO] 10.244.0.30:59472 - 26230 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001665181s
	[INFO] 10.244.0.30:37168 - 51340 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002682948s
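	(The NXDOMAIN runs above are the pod resolver walking its search list: storage.googleapis.com and accounts.google.com have fewer dots than the ndots threshold, so each search suffix — cluster.local, us-east4-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal — is tried before the bare name finally returns NOERROR. A minimal Go sketch of that expansion follows; the exact search list and ndots:5 are assumptions based on typical pod resolv.conf contents, only the last four suffixes are visible in this excerpt.)

	package main

	import (
		"fmt"
		"strings"
	)

	// candidates mimics resolv.conf search-list expansion: names with fewer than
	// ndots dots are tried with each search suffix first, then as-is.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name+".")
	}

	func main() {
		// Assumed search list for the querying pod; the svc-scoped suffixes are
		// the usual kubelet defaults and do not appear in the excerpt above.
		search := []string{
			"default.svc.cluster.local", "svc.cluster.local", "cluster.local",
			"us-east4-a.c.k8s-minikube.internal", "c.k8s-minikube.internal", "google.internal",
		}
		for _, q := range candidates("storage.googleapis.com", search, 5) {
			fmt.Println(q) // each suffixed form returns NXDOMAIN; the bare name resolves
		}
	}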
	
	
	==> describe nodes <==
	Name:               addons-791857
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-791857
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=addons-791857
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_25_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-791857
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-791857"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:25:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-791857
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:29:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:29:13 +0000   Fri, 19 Dec 2025 02:25:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:29:13 +0000   Fri, 19 Dec 2025 02:25:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:29:13 +0000   Fri, 19 Dec 2025 02:25:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:29:13 +0000   Fri, 19 Dec 2025 02:25:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-791857
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                75c2a887-6e79-49d2-accf-6fefcc720450
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  default                     cloud-spanner-emulator-5bdddb765-jb86j       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  default                     hello-world-app-5d498dc89-pjcjr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-j5dvh                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  gcp-auth                    gcp-auth-78565c9fb4-6bmz4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-qmd9h    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m35s
	  kube-system                 amd-gpu-device-plugin-j2hvw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 coredns-66bc5c9577-w88lw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m36s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 csi-hostpathplugin-stf22                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 etcd-addons-791857                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m42s
	  kube-system                 kindnet-hdbwg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m36s
	  kube-system                 kube-apiserver-addons-791857                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-controller-manager-addons-791857        200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-proxy-7g9j9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-scheduler-addons-791857                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 metrics-server-85b7d694d7-dnphb              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m35s
	  kube-system                 nvidia-device-plugin-daemonset-9ngs4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 registry-6b586f9694-j2n8x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 registry-creds-764b6fb674-xdlrg              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 registry-proxy-wsz68                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-d42wc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 snapshot-controller-7d9fbc56b8-v6xz5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  local-path-storage          local-path-provisioner-648f6765c9-ld25w      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-b29t5              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m34s                  kube-proxy       
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m46s (x8 over 3m47s)  kubelet          Node addons-791857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x8 over 3m47s)  kubelet          Node addons-791857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x8 over 3m47s)  kubelet          Node addons-791857 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s                  kubelet          Node addons-791857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s                  kubelet          Node addons-791857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s                  kubelet          Node addons-791857 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m37s                  node-controller  Node addons-791857 event: Registered Node addons-791857 in Controller
	  Normal  NodeReady                3m23s                  kubelet          Node addons-791857 status is now: NodeReady
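	(For reference, the percentages in the Allocated resources table above follow directly from the Allocatable block: the per-pod CPU requests sum to 100+100+100+100+250+200+100+100 = 1050m, which is 13% of the node's 8000m, and the memory requests 90+70+100+50+200+128 = 638Mi are roughly 1% of 32863360Ki once truncated to whole percent. A small Go sketch of that arithmetic, not kubectl's actual code:)

	package main

	import "fmt"

	func main() {
		allocatableCPUm := int64(8 * 1000)  // 8 CPUs in millicores, from the Allocatable block
		allocatableMemKi := int64(32863360) // Ki, from the Allocatable block

		// Nonzero CPU requests from the pod table: ingress-nginx-controller, coredns,
		// etcd, kindnet, kube-apiserver, kube-controller-manager, kube-scheduler, metrics-server.
		cpuRequestsM := int64(100 + 100 + 100 + 100 + 250 + 200 + 100 + 100)
		// Nonzero memory requests: ingress-nginx, coredns, etcd, kindnet, metrics-server, yakd.
		memRequestsMi := int64(90 + 70 + 100 + 50 + 200 + 128)

		fmt.Printf("cpu: %dm (%d%%)\n", cpuRequestsM, cpuRequestsM*100/allocatableCPUm)
		fmt.Printf("memory: %dMi (%d%%)\n", memRequestsMi, memRequestsMi*1024*100/allocatableMemKi)
	}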
	
	
	==> dmesg <==
	[  +0.091115] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025741] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.646270] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 02:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.041250] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.024871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.022884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +8.127187] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[ +16.382230] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[Dec19 02:28] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	
	
	==> etcd [73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4] <==
	{"level":"warn","ts":"2025-12-19T02:25:36.917931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.923911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.930057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.936296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.942529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.949011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.961335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.967989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.975765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.983039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.994919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:37.001224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:37.007888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:47.950848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:47.957984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45416","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:26:08.976378Z","caller":"traceutil/trace.go:172","msg":"trace[1967062358] transaction","detail":"{read_only:false; response_revision:1001; number_of_response:1; }","duration":"128.79707ms","start":"2025-12-19T02:26:08.847558Z","end":"2025-12-19T02:26:08.976355Z","steps":["trace[1967062358] 'process raft request'  (duration: 110.719115ms)","trace[1967062358] 'compare'  (duration: 17.994964ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T02:26:09.013062Z","caller":"traceutil/trace.go:172","msg":"trace[1349280637] transaction","detail":"{read_only:false; response_revision:1004; number_of_response:1; }","duration":"163.883085ms","start":"2025-12-19T02:26:08.849172Z","end":"2025-12-19T02:26:09.013055Z","steps":["trace[1349280637] 'process raft request'  (duration: 163.833558ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:26:09.013081Z","caller":"traceutil/trace.go:172","msg":"trace[1662932122] transaction","detail":"{read_only:false; response_revision:1003; number_of_response:1; }","duration":"164.094898ms","start":"2025-12-19T02:26:08.848968Z","end":"2025-12-19T02:26:09.013063Z","steps":["trace[1662932122] 'process raft request'  (duration: 164.001974ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:26:09.013043Z","caller":"traceutil/trace.go:172","msg":"trace[218474860] transaction","detail":"{read_only:false; response_revision:1002; number_of_response:1; }","duration":"165.455319ms","start":"2025-12-19T02:26:08.847567Z","end":"2025-12-19T02:26:09.013023Z","steps":["trace[218474860] 'process raft request'  (duration: 165.305184ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:26:14.436321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:14.445295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:14.471069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:14.479054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52152","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:26:38.525237Z","caller":"traceutil/trace.go:172","msg":"trace[924737974] transaction","detail":"{read_only:false; response_revision:1239; number_of_response:1; }","duration":"104.853859ms","start":"2025-12-19T02:26:38.420365Z","end":"2025-12-19T02:26:38.525219Z","steps":["trace[924737974] 'process raft request'  (duration: 104.81566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:26:38.525247Z","caller":"traceutil/trace.go:172","msg":"trace[1045968563] transaction","detail":"{read_only:false; response_revision:1238; number_of_response:1; }","duration":"157.002694ms","start":"2025-12-19T02:26:38.368229Z","end":"2025-12-19T02:26:38.525232Z","steps":["trace[1045968563] 'process raft request'  (duration: 132.360987ms)","trace[1045968563] 'compare'  (duration: 24.476047ms)"],"step_count":2}
	
	
	==> gcp-auth [9ddd01031bdbf0666aae205c610b63776c66380347b086a2704c9e17e86f1d33] <==
	2025/12/19 02:26:35 GCP Auth Webhook started!
	2025/12/19 02:26:38 Ready to marshal response ...
	2025/12/19 02:26:38 Ready to write response ...
	2025/12/19 02:26:38 Ready to marshal response ...
	2025/12/19 02:26:38 Ready to write response ...
	2025/12/19 02:26:38 Ready to marshal response ...
	2025/12/19 02:26:38 Ready to write response ...
	2025/12/19 02:26:48 Ready to marshal response ...
	2025/12/19 02:26:48 Ready to write response ...
	2025/12/19 02:26:48 Ready to marshal response ...
	2025/12/19 02:26:48 Ready to write response ...
	2025/12/19 02:26:55 Ready to marshal response ...
	2025/12/19 02:26:55 Ready to write response ...
	2025/12/19 02:26:56 Ready to marshal response ...
	2025/12/19 02:26:56 Ready to write response ...
	2025/12/19 02:26:58 Ready to marshal response ...
	2025/12/19 02:26:58 Ready to write response ...
	2025/12/19 02:27:01 Ready to marshal response ...
	2025/12/19 02:27:01 Ready to write response ...
	2025/12/19 02:27:18 Ready to marshal response ...
	2025/12/19 02:27:18 Ready to write response ...
	2025/12/19 02:29:19 Ready to marshal response ...
	2025/12/19 02:29:19 Ready to write response ...
	
	
	==> kernel <==
	 02:29:21 up 11 min,  0 user,  load average: 0.35, 0.63, 0.31
	Linux addons-791857 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07] <==
	I1219 02:27:18.048815       1 main.go:301] handling current node
	I1219 02:27:28.048696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:27:28.048749       1 main.go:301] handling current node
	I1219 02:27:38.051300       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:27:38.051335       1 main.go:301] handling current node
	I1219 02:27:48.048564       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:27:48.048591       1 main.go:301] handling current node
	I1219 02:27:58.048736       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:27:58.048777       1 main.go:301] handling current node
	I1219 02:28:08.049277       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:28:08.049307       1 main.go:301] handling current node
	I1219 02:28:18.048744       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:28:18.048777       1 main.go:301] handling current node
	I1219 02:28:28.049135       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:28:28.049188       1 main.go:301] handling current node
	I1219 02:28:38.049649       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:28:38.049685       1 main.go:301] handling current node
	I1219 02:28:48.048858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:28:48.048905       1 main.go:301] handling current node
	I1219 02:28:58.055093       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:28:58.055165       1 main.go:301] handling current node
	I1219 02:29:08.049344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:29:08.049392       1 main.go:301] handling current node
	I1219 02:29:18.048560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:29:18.048603       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5] <==
	W1219 02:26:10.020829       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 02:26:10.021012       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 02:26:10.021062       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 02:26:10.021013       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 02:26:10.022204       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 02:26:14.031817       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 02:26:14.031867       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 02:26:14.031880       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.203.231:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.203.231:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1219 02:26:14.042313       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 02:26:14.436234       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:26:14.444975       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:26:14.465395       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:26:14.478836       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1219 02:26:47.617544       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37810: use of closed network connection
	E1219 02:26:47.755212       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37824: use of closed network connection
	I1219 02:26:56.399423       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1219 02:26:56.587566       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.9.89"}
	I1219 02:27:07.287240       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1219 02:29:19.624512       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.13.218"}
	
	
	==> kube-controller-manager [8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702] <==
	I1219 02:25:44.419483       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 02:25:44.419624       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-791857"
	I1219 02:25:44.419677       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1219 02:25:44.419748       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 02:25:44.419810       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 02:25:44.420365       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 02:25:44.420379       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 02:25:44.420366       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 02:25:44.420497       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:25:44.420589       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 02:25:44.420676       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 02:25:44.421605       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 02:25:44.421754       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 02:25:44.422638       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 02:25:44.423745       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:25:44.423746       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:25:44.428481       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 02:25:44.439216       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1219 02:25:46.549330       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1219 02:25:59.422839       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1219 02:26:14.429133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1219 02:26:14.429202       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1219 02:26:14.456992       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1219 02:26:14.529985       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:26:14.558196       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f] <==
	I1219 02:25:46.356225       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:25:46.577032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:25:46.679262       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:25:46.679305       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:25:46.679402       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:25:46.757988       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:25:46.758058       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:25:46.765698       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:25:46.772192       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:25:46.772371       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:25:46.780273       1 config.go:200] "Starting service config controller"
	I1219 02:25:46.780293       1 config.go:309] "Starting node config controller"
	I1219 02:25:46.780306       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:25:46.780308       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:25:46.780314       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:25:46.780295       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:25:46.780327       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:25:46.780334       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:25:46.780316       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:25:46.880516       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:25:46.880515       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:25:46.880582       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a] <==
	E1219 02:25:37.435178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:25:37.435270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:25:37.435304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:25:37.435348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 02:25:37.435377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 02:25:37.435380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:25:37.435402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:25:37.435452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:25:37.435507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 02:25:37.435524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:25:37.435534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:25:37.435597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:25:37.435688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:25:37.435731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:25:37.435794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:25:38.241559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 02:25:38.306778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:25:38.323967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:25:38.355388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:25:38.436128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:25:38.453364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:25:38.517460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:25:38.523549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:25:38.529692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1219 02:25:39.027501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 02:27:25 addons-791857 kubelet[1285]: I1219 02:27:25.843063    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7f7fb57-35df-4cb6-a8cd-c4f2898685ca-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f7f7fb57-35df-4cb6-a8cd-c4f2898685ca" (UID: "f7f7fb57-35df-4cb6-a8cd-c4f2898685ca"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 19 02:27:25 addons-791857 kubelet[1285]: I1219 02:27:25.844992    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f7fb57-35df-4cb6-a8cd-c4f2898685ca-kube-api-access-tpm4g" (OuterVolumeSpecName: "kube-api-access-tpm4g") pod "f7f7fb57-35df-4cb6-a8cd-c4f2898685ca" (UID: "f7f7fb57-35df-4cb6-a8cd-c4f2898685ca"). InnerVolumeSpecName "kube-api-access-tpm4g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 19 02:27:25 addons-791857 kubelet[1285]: I1219 02:27:25.845971    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^3ce77304-dc82-11f0-9106-2e8309b65a10" (OuterVolumeSpecName: "task-pv-storage") pod "f7f7fb57-35df-4cb6-a8cd-c4f2898685ca" (UID: "f7f7fb57-35df-4cb6-a8cd-c4f2898685ca"). InnerVolumeSpecName "pvc-17d44ab3-a7e9-43f8-822f-550e936fd654". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 19 02:27:25 addons-791857 kubelet[1285]: I1219 02:27:25.944201    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tpm4g\" (UniqueName: \"kubernetes.io/projected/f7f7fb57-35df-4cb6-a8cd-c4f2898685ca-kube-api-access-tpm4g\") on node \"addons-791857\" DevicePath \"\""
	Dec 19 02:27:25 addons-791857 kubelet[1285]: I1219 02:27:25.944264    1285 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-17d44ab3-a7e9-43f8-822f-550e936fd654\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3ce77304-dc82-11f0-9106-2e8309b65a10\") on node \"addons-791857\" "
	Dec 19 02:27:25 addons-791857 kubelet[1285]: I1219 02:27:25.944279    1285 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f7f7fb57-35df-4cb6-a8cd-c4f2898685ca-gcp-creds\") on node \"addons-791857\" DevicePath \"\""
	Dec 19 02:27:25 addons-791857 kubelet[1285]: I1219 02:27:25.949436    1285 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-17d44ab3-a7e9-43f8-822f-550e936fd654" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^3ce77304-dc82-11f0-9106-2e8309b65a10") on node "addons-791857"
	Dec 19 02:27:26 addons-791857 kubelet[1285]: I1219 02:27:26.045031    1285 reconciler_common.go:299] "Volume detached for volume \"pvc-17d44ab3-a7e9-43f8-822f-550e936fd654\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3ce77304-dc82-11f0-9106-2e8309b65a10\") on node \"addons-791857\" DevicePath \"\""
	Dec 19 02:27:26 addons-791857 kubelet[1285]: I1219 02:27:26.135854    1285 scope.go:117] "RemoveContainer" containerID="5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99"
	Dec 19 02:27:26 addons-791857 kubelet[1285]: I1219 02:27:26.146340    1285 scope.go:117] "RemoveContainer" containerID="5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99"
	Dec 19 02:27:26 addons-791857 kubelet[1285]: E1219 02:27:26.146847    1285 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99\": container with ID starting with 5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99 not found: ID does not exist" containerID="5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99"
	Dec 19 02:27:26 addons-791857 kubelet[1285]: I1219 02:27:26.146893    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99"} err="failed to get container status \"5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99\": rpc error: code = NotFound desc = could not find container \"5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99\": container with ID starting with 5bcbda254234b7212fca638e1e2875fbc31a656518feecd0b57fde2ea4c3be99 not found: ID does not exist"
	Dec 19 02:27:27 addons-791857 kubelet[1285]: I1219 02:27:27.616178    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7f7fb57-35df-4cb6-a8cd-c4f2898685ca" path="/var/lib/kubelet/pods/f7f7fb57-35df-4cb6-a8cd-c4f2898685ca/volumes"
	Dec 19 02:27:39 addons-791857 kubelet[1285]: I1219 02:27:39.603330    1285 scope.go:117] "RemoveContainer" containerID="f75adaa44af08a6ec344f7a4d7dc1dcc15acf5e78fb8efb97e7d2cb0448fee74"
	Dec 19 02:27:39 addons-791857 kubelet[1285]: I1219 02:27:39.611331    1285 scope.go:117] "RemoveContainer" containerID="3d53a5d1fdc9e81afde733cb540f02d2774dd47e18e986fbae0bc07f90757227"
	Dec 19 02:27:39 addons-791857 kubelet[1285]: I1219 02:27:39.619549    1285 scope.go:117] "RemoveContainer" containerID="c136b6a814e1362132d7fb60eeceebf5c10acf1f8e9a089b897cc29f0d64f3eb"
	Dec 19 02:27:39 addons-791857 kubelet[1285]: I1219 02:27:39.626888    1285 scope.go:117] "RemoveContainer" containerID="26a9c075fa6562fb66985c96b59e50ccf7bf669843fc9db633033d98f8f10b0d"
	Dec 19 02:27:40 addons-791857 kubelet[1285]: I1219 02:27:40.613186    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9ngs4" secret="" err="secret \"gcp-auth\" not found"
	Dec 19 02:27:53 addons-791857 kubelet[1285]: I1219 02:27:53.612925    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wsz68" secret="" err="secret \"gcp-auth\" not found"
	Dec 19 02:28:45 addons-791857 kubelet[1285]: I1219 02:28:45.613279    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9ngs4" secret="" err="secret \"gcp-auth\" not found"
	Dec 19 02:28:51 addons-791857 kubelet[1285]: I1219 02:28:51.613337    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-j2hvw" secret="" err="secret \"gcp-auth\" not found"
	Dec 19 02:29:01 addons-791857 kubelet[1285]: I1219 02:29:01.613197    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wsz68" secret="" err="secret \"gcp-auth\" not found"
	Dec 19 02:29:19 addons-791857 kubelet[1285]: I1219 02:29:19.660932    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxwj\" (UniqueName: \"kubernetes.io/projected/df53b82f-8343-4d95-96b2-bba6866822cb-kube-api-access-rbxwj\") pod \"hello-world-app-5d498dc89-pjcjr\" (UID: \"df53b82f-8343-4d95-96b2-bba6866822cb\") " pod="default/hello-world-app-5d498dc89-pjcjr"
	Dec 19 02:29:19 addons-791857 kubelet[1285]: I1219 02:29:19.661026    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/df53b82f-8343-4d95-96b2-bba6866822cb-gcp-creds\") pod \"hello-world-app-5d498dc89-pjcjr\" (UID: \"df53b82f-8343-4d95-96b2-bba6866822cb\") " pod="default/hello-world-app-5d498dc89-pjcjr"
	Dec 19 02:29:20 addons-791857 kubelet[1285]: I1219 02:29:20.570092    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-pjcjr" podStartSLOduration=1.175214973 podStartE2EDuration="1.570071325s" podCreationTimestamp="2025-12-19 02:29:19 +0000 UTC" firstStartedPulling="2025-12-19 02:29:19.886550704 +0000 UTC m=+220.356291280" lastFinishedPulling="2025-12-19 02:29:20.281407048 +0000 UTC m=+220.751147632" observedRunningTime="2025-12-19 02:29:20.568925935 +0000 UTC m=+221.038666532" watchObservedRunningTime="2025-12-19 02:29:20.570071325 +0000 UTC m=+221.039811922"
	
	
	==> storage-provisioner [bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3] <==
	W1219 02:28:55.619122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:28:57.622181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:28:57.626893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:28:59.629958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:28:59.634037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:01.636725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:01.640304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:03.643314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:03.647049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:05.649715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:05.654749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:07.657297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:07.660828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:09.664414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:09.670042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:11.672973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:11.676770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:13.679523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:13.683265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:15.686800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:15.690655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:17.693323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:17.696697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:19.700074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:29:19.703567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
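The storage-provisioner log above is dominated by one repeating client-go warning: core/v1 Endpoints is deprecated as of Kubernetes v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. The warning most likely comes from the provisioner's periodic Endpoints reads (for example its leader-election client) and is noise with respect to this Ingress failure. For reference, the replacement resources can be inspected directly (illustrative command, not part of the test run):

kubectl --context addons-791857 get endpointslices.discovery.k8s.io -A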
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-791857 -n addons-791857
helpers_test.go:270: (dbg) Run:  kubectl --context addons-791857 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-l2d6q ingress-nginx-admission-patch-kl62v
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-791857 describe pod ingress-nginx-admission-create-l2d6q ingress-nginx-admission-patch-kl62v
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-791857 describe pod ingress-nginx-admission-create-l2d6q ingress-nginx-admission-patch-kl62v: exit status 1 (55.01328ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l2d6q" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kl62v" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-791857 describe pod ingress-nginx-admission-create-l2d6q ingress-nginx-admission-patch-kl62v: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (250.043095ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:29:22.132619   24525 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:29:22.132925   24525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:29:22.132940   24525 out.go:374] Setting ErrFile to fd 2...
	I1219 02:29:22.132945   24525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:29:22.133191   24525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:29:22.133497   24525 mustload.go:66] Loading cluster: addons-791857
	I1219 02:29:22.133844   24525 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:29:22.133866   24525 addons.go:638] checking whether the cluster is paused
	I1219 02:29:22.133969   24525 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:29:22.133982   24525 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:29:22.134387   24525 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:29:22.152194   24525 ssh_runner.go:195] Run: systemctl --version
	I1219 02:29:22.152322   24525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:29:22.170562   24525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:29:22.271323   24525 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:29:22.271430   24525 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:29:22.300440   24525 cri.go:92] found id: "82083cc2b8ec2b9f35c8877a2b88e8140201a847ce5fcc112fb8edde1bd778a9"
	I1219 02:29:22.300462   24525 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:29:22.300466   24525 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:29:22.300470   24525 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:29:22.300472   24525 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:29:22.300485   24525 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:29:22.300488   24525 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:29:22.300491   24525 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:29:22.300493   24525 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:29:22.300505   24525 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:29:22.300510   24525 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:29:22.300513   24525 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:29:22.300516   24525 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:29:22.300519   24525 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:29:22.300525   24525 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:29:22.300529   24525 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:29:22.300532   24525 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:29:22.300536   24525 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:29:22.300539   24525 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:29:22.300545   24525 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:29:22.300548   24525 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:29:22.300551   24525 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:29:22.300553   24525 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:29:22.300556   24525 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:29:22.300559   24525 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:29:22.300564   24525 cri.go:92] found id: ""
	I1219 02:29:22.300605   24525 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:29:22.314516   24525 out.go:203] 
	W1219 02:29:22.315749   24525 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:29:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:29:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:29:22.315769   24525 out.go:285] * 
	* 
	W1219 02:29:22.318818   24525 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:29:22.320223   24525 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable ingress --alsologtostderr -v=1: exit status 11 (250.539277ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:29:22.377583   24587 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:29:22.377893   24587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:29:22.377905   24587 out.go:374] Setting ErrFile to fd 2...
	I1219 02:29:22.377911   24587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:29:22.378094   24587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:29:22.378357   24587 mustload.go:66] Loading cluster: addons-791857
	I1219 02:29:22.378669   24587 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:29:22.378692   24587 addons.go:638] checking whether the cluster is paused
	I1219 02:29:22.378806   24587 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:29:22.378822   24587 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:29:22.379171   24587 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:29:22.398936   24587 ssh_runner.go:195] Run: systemctl --version
	I1219 02:29:22.398989   24587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:29:22.416726   24587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:29:22.516924   24587 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:29:22.517036   24587 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:29:22.547839   24587 cri.go:92] found id: "82083cc2b8ec2b9f35c8877a2b88e8140201a847ce5fcc112fb8edde1bd778a9"
	I1219 02:29:22.547858   24587 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:29:22.547862   24587 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:29:22.547865   24587 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:29:22.547869   24587 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:29:22.547872   24587 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:29:22.547875   24587 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:29:22.547877   24587 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:29:22.547880   24587 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:29:22.547912   24587 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:29:22.547918   24587 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:29:22.547921   24587 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:29:22.547924   24587 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:29:22.547927   24587 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:29:22.547930   24587 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:29:22.547941   24587 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:29:22.547946   24587 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:29:22.547950   24587 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:29:22.547953   24587 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:29:22.547956   24587 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:29:22.547959   24587 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:29:22.547962   24587 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:29:22.547965   24587 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:29:22.547971   24587 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:29:22.547974   24587 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:29:22.547977   24587 cri.go:92] found id: ""
	I1219 02:29:22.548014   24587 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:29:22.563069   24587 out.go:203] 
	W1219 02:29:22.564517   24587 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:29:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:29:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:29:22.564556   24587 out.go:285] * 
	* 
	W1219 02:29:22.569071   24587 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:29:22.570550   24587 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.42s)
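Note on the repeated exit status 11 failures: each "addons disable" invocation above aborts before doing any work because minikube first checks whether the cluster is paused by running "sudo runc list -f json" on the node, and that command fails with "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED. A plausible (unverified) explanation is that this CRI-O image is configured with a different low-level runtime, or keeps its runtime state somewhere other than /run/runc, so the directory minikube expects never exists. A minimal diagnostic sketch against the node, assuming standard minikube ssh tooling (these commands are illustrative and not part of the test run):

minikube -p addons-791857 ssh -- ls -d /run/runc /run/crun
minikube -p addons-791857 ssh -- sudo crictl info | grep -i runtime
minikube -p addons-791857 ssh -- sudo grep -rn default_runtime /etc/crio/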

                                                
                                    
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-j5dvh" [ffd0103a-ebcc-4aa0-8b8d-7227394c9d34] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00397994s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (244.035801ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:27:03.648052   21495 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:27:03.648304   21495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:03.648312   21495 out.go:374] Setting ErrFile to fd 2...
	I1219 02:27:03.648317   21495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:03.648506   21495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:27:03.648777   21495 mustload.go:66] Loading cluster: addons-791857
	I1219 02:27:03.649068   21495 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:03.649085   21495 addons.go:638] checking whether the cluster is paused
	I1219 02:27:03.649160   21495 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:03.649171   21495 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:27:03.649501   21495 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:27:03.667521   21495 ssh_runner.go:195] Run: systemctl --version
	I1219 02:27:03.667573   21495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:27:03.686026   21495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:27:03.786772   21495 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:27:03.786869   21495 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:27:03.815169   21495 cri.go:92] found id: "82083cc2b8ec2b9f35c8877a2b88e8140201a847ce5fcc112fb8edde1bd778a9"
	I1219 02:27:03.815195   21495 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:27:03.815200   21495 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:27:03.815204   21495 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:27:03.815207   21495 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:27:03.815211   21495 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:27:03.815214   21495 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:27:03.815217   21495 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:27:03.815221   21495 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:27:03.815228   21495 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:27:03.815236   21495 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:27:03.815241   21495 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:27:03.815251   21495 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:27:03.815256   21495 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:27:03.815261   21495 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:27:03.815267   21495 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:27:03.815271   21495 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:27:03.815284   21495 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:27:03.815287   21495 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:27:03.815290   21495 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:27:03.815292   21495 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:27:03.815295   21495 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:27:03.815297   21495 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:27:03.815300   21495 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:27:03.815303   21495 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:27:03.815305   21495 cri.go:92] found id: ""
	I1219 02:27:03.815351   21495 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:27:03.829243   21495 out.go:203] 
	W1219 02:27:03.830447   21495 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:27:03.830470   21495 out.go:285] * 
	* 
	W1219 02:27:03.833571   21495 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:27:03.834757   21495 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.718618ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003533772s
addons_test.go:465: (dbg) Run:  kubectl --context addons-791857 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (250.712029ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:26:58.396311   20630 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:26:58.396503   20630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:58.396516   20630 out.go:374] Setting ErrFile to fd 2...
	I1219 02:26:58.396520   20630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:58.396725   20630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:26:58.396988   20630 mustload.go:66] Loading cluster: addons-791857
	I1219 02:26:58.397312   20630 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:58.397331   20630 addons.go:638] checking whether the cluster is paused
	I1219 02:26:58.397413   20630 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:58.397426   20630 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:26:58.397803   20630 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:26:58.415079   20630 ssh_runner.go:195] Run: systemctl --version
	I1219 02:26:58.415143   20630 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:26:58.433064   20630 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:26:58.534378   20630 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:26:58.534458   20630 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:26:58.565776   20630 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:26:58.565802   20630 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:26:58.565809   20630 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:26:58.565814   20630 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:26:58.565819   20630 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:26:58.565823   20630 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:26:58.565827   20630 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:26:58.565831   20630 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:26:58.565834   20630 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:26:58.565841   20630 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:26:58.565846   20630 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:26:58.565850   20630 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:26:58.565855   20630 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:26:58.565859   20630 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:26:58.565865   20630 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:26:58.565884   20630 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:26:58.565895   20630 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:26:58.565902   20630 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:26:58.565910   20630 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:26:58.565915   20630 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:26:58.565927   20630 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:26:58.565935   20630 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:26:58.565940   20630 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:26:58.565946   20630 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:26:58.565951   20630 cri.go:92] found id: ""
	I1219 02:26:58.566001   20630 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:26:58.579772   20630 out.go:203] 
	W1219 02:26:58.581048   20630 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:26:58.581078   20630 out.go:285] * 
	* 
	W1219 02:26:58.583996   20630 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:26:58.585286   20630 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

                                                
                                    
TestAddons/parallel/CSI (31.09s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1219 02:26:55.896020    8536 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1219 02:26:55.899238    8536 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1219 02:26:55.899264    8536 kapi.go:107] duration metric: took 3.257118ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.268606ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-791857 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-791857 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [01148dcc-f109-422f-8ec0-2a96ace49d90] Pending
2025/12/19 02:27:01 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:353: "task-pv-pod" [01148dcc-f109-422f-8ec0-2a96ace49d90] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003500753s
addons_test.go:574: (dbg) Run:  kubectl --context addons-791857 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-791857 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-791857 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-791857 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-791857 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-791857 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-791857 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [f7f7fb57-35df-4cb6-a8cd-c4f2898685ca] Pending
helpers_test.go:353: "task-pv-pod-restore" [f7f7fb57-35df-4cb6-a8cd-c4f2898685ca] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00383227s
addons_test.go:616: (dbg) Run:  kubectl --context addons-791857 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-791857 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-791857 delete volumesnapshot new-snapshot-demo
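The sequence above walks the standard CSI snapshot/restore flow: create the hpvc claim, run task-pv-pod against it, take the new-snapshot-demo VolumeSnapshot, then create hpvc-restore as a new claim whose dataSource points back at that snapshot and mount it from task-pv-pod-restore. A minimal sketch of what a restore claim of this shape typically looks like (illustrative only; the storage class name is assumed and this is not the literal contents of testdata/csi-hostpath-driver/pvc-restore.yaml):

kubectl --context addons-791857 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc   # assumed class name for the hostpath CSI driver
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  dataSource:                         # restore source: the snapshot taken earlier
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: new-snapshot-demo
EOF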
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (250.26236ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:27:26.534950   22297 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:27:26.535239   22297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:26.535250   22297 out.go:374] Setting ErrFile to fd 2...
	I1219 02:27:26.535254   22297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:26.535496   22297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:27:26.535829   22297 mustload.go:66] Loading cluster: addons-791857
	I1219 02:27:26.536198   22297 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:26.536219   22297 addons.go:638] checking whether the cluster is paused
	I1219 02:27:26.536318   22297 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:26.536335   22297 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:27:26.536732   22297 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:27:26.555654   22297 ssh_runner.go:195] Run: systemctl --version
	I1219 02:27:26.555736   22297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:27:26.574070   22297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:27:26.675463   22297 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:27:26.675555   22297 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:27:26.704202   22297 cri.go:92] found id: "82083cc2b8ec2b9f35c8877a2b88e8140201a847ce5fcc112fb8edde1bd778a9"
	I1219 02:27:26.704227   22297 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:27:26.704231   22297 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:27:26.704236   22297 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:27:26.704239   22297 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:27:26.704242   22297 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:27:26.704245   22297 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:27:26.704248   22297 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:27:26.704250   22297 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:27:26.704255   22297 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:27:26.704258   22297 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:27:26.704261   22297 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:27:26.704271   22297 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:27:26.704273   22297 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:27:26.704276   22297 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:27:26.704284   22297 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:27:26.704287   22297 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:27:26.704290   22297 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:27:26.704293   22297 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:27:26.704296   22297 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:27:26.704298   22297 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:27:26.704301   22297 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:27:26.704304   22297 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:27:26.704306   22297 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:27:26.704310   22297 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:27:26.704313   22297 cri.go:92] found id: ""
	I1219 02:27:26.704352   22297 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:27:26.718972   22297 out.go:203] 
	W1219 02:27:26.720353   22297 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:27:26.720388   22297 out.go:285] * 
	* 
	W1219 02:27:26.723461   22297 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:27:26.724758   22297 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
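Note: the disable never reaches the volumesnapshots addon. The command aborts in minikube's pre-flight "is the cluster paused?" check, which lists kube-system containers through crictl and then shells out to sudo runc list -f json; on this crio node /run/runc does not exist, so that listing fails and the CLI exits 11 with MK_ADDON_DISABLE_PAUSED. A minimal sketch to reproduce the two halves of that check by hand against the same profile (both inner commands are copied from the log above; the minikube ssh wrapper and the comments are assumptions):

	# crictl half of the check: expected to succeed and print kube-system container IDs
	out/minikube-linux-amd64 -p addons-791857 ssh -- 'sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
	# runc half of the check: expected to fail with "open /run/runc: no such file or directory"
	out/minikube-linux-amd64 -p addons-791857 ssh -- 'sudo runc list -f json'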
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (249.640797ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:27:26.785656   22361 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:27:26.785993   22361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:26.786003   22361 out.go:374] Setting ErrFile to fd 2...
	I1219 02:27:26.786008   22361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:26.786235   22361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:27:26.786469   22361 mustload.go:66] Loading cluster: addons-791857
	I1219 02:27:26.786791   22361 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:26.786810   22361 addons.go:638] checking whether the cluster is paused
	I1219 02:27:26.786891   22361 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:26.786903   22361 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:27:26.787343   22361 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:27:26.805285   22361 ssh_runner.go:195] Run: systemctl --version
	I1219 02:27:26.805339   22361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:27:26.823156   22361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:27:26.925073   22361 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:27:26.925159   22361 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:27:26.954501   22361 cri.go:92] found id: "82083cc2b8ec2b9f35c8877a2b88e8140201a847ce5fcc112fb8edde1bd778a9"
	I1219 02:27:26.954524   22361 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:27:26.954530   22361 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:27:26.954537   22361 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:27:26.954541   22361 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:27:26.954545   22361 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:27:26.954549   22361 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:27:26.954553   22361 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:27:26.954557   22361 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:27:26.954564   22361 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:27:26.954569   22361 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:27:26.954573   22361 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:27:26.954586   22361 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:27:26.954595   22361 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:27:26.954599   22361 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:27:26.954616   22361 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:27:26.954622   22361 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:27:26.954626   22361 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:27:26.954630   22361 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:27:26.954634   22361 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:27:26.954639   22361 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:27:26.954646   22361 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:27:26.954651   22361 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:27:26.954659   22361 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:27:26.954665   22361 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:27:26.954672   22361 cri.go:92] found id: ""
	I1219 02:27:26.954737   22361 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:27:26.968870   22361 out.go:203] 
	W1219 02:27:26.970226   22361 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:27:26.970247   22361 out.go:285] * 
	W1219 02:27:26.973344   22361 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:27:26.974821   22361 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
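Note: as with volumesnapshots above, the csi-hostpath-driver disable fails in the same paused pre-check rather than in the driver itself. A hedged way to probe which low-level OCI runtime the node actually uses, and why a bare runc invocation finds no state directory under /run/runc (the crun path and the grep filter are assumptions, not taken from this log):

	out/minikube-linux-amd64 -p addons-791857 ssh -- 'which runc crun; ls -d /run/runc /run/crun 2>&1'
	out/minikube-linux-amd64 -p addons-791857 ssh -- 'sudo crictl info | grep -i -m1 runtime'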
--- FAIL: TestAddons/parallel/CSI (31.09s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-791857 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-791857 --alsologtostderr -v=1: exit status 11 (276.461542ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:26:48.084267   18596 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:26:48.084404   18596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:48.084415   18596 out.go:374] Setting ErrFile to fd 2...
	I1219 02:26:48.084419   18596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:48.084616   18596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:26:48.084970   18596 mustload.go:66] Loading cluster: addons-791857
	I1219 02:26:48.085412   18596 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:48.085440   18596 addons.go:638] checking whether the cluster is paused
	I1219 02:26:48.085621   18596 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:48.085641   18596 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:26:48.086253   18596 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:26:48.106062   18596 ssh_runner.go:195] Run: systemctl --version
	I1219 02:26:48.106121   18596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:26:48.126748   18596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:26:48.229774   18596 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:26:48.229897   18596 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:26:48.260632   18596 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:26:48.260657   18596 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:26:48.260661   18596 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:26:48.260664   18596 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:26:48.260680   18596 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:26:48.260686   18596 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:26:48.260690   18596 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:26:48.260728   18596 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:26:48.260734   18596 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:26:48.260742   18596 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:26:48.260753   18596 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:26:48.260758   18596 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:26:48.260762   18596 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:26:48.260770   18596 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:26:48.260774   18596 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:26:48.260782   18596 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:26:48.260785   18596 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:26:48.260789   18596 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:26:48.260792   18596 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:26:48.260797   18596 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:26:48.260805   18596 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:26:48.260816   18596 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:26:48.260820   18596 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:26:48.260825   18596 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:26:48.260830   18596 cri.go:92] found id: ""
	I1219 02:26:48.260875   18596 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:26:48.278795   18596 out.go:203] 
	W1219 02:26:48.280186   18596 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:26:48.280253   18596 out.go:285] * 
	W1219 02:26:48.285129   18596 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:26:48.286603   18596 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-791857 --alsologtostderr -v=1": exit status 11
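Note: the Headlamp enable aborts in the same paused pre-check before any Headlamp manifests are applied. The docker inspect output captured below already shows the node container as running and not paused; the same can be confirmed directly from the host (a minimal sketch; the --format string mirrors the inspect and status calls elsewhere in this log):

	docker container inspect addons-791857 --format 'status={{.State.Status}} paused={{.State.Paused}}'
	out/minikube-linux-amd64 -p addons-791857 status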
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-791857
helpers_test.go:244: (dbg) docker inspect addons-791857:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673",
	        "Created": "2025-12-19T02:25:26.750670587Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 10952,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T02:25:26.783958646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673/hosts",
	        "LogPath": "/var/lib/docker/containers/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673/5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673-json.log",
	        "Name": "/addons-791857",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-791857:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-791857",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f8c6486dcdf905ba469992334e6723f1c1a055a4dff1868fe30cba7b5fac673",
	                "LowerDir": "/var/lib/docker/overlay2/42ba77f82f3c4f7d364e625b82335250adc58df585512dacfc98666463bf98fa-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42ba77f82f3c4f7d364e625b82335250adc58df585512dacfc98666463bf98fa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42ba77f82f3c4f7d364e625b82335250adc58df585512dacfc98666463bf98fa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42ba77f82f3c4f7d364e625b82335250adc58df585512dacfc98666463bf98fa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-791857",
	                "Source": "/var/lib/docker/volumes/addons-791857/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-791857",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-791857",
	                "name.minikube.sigs.k8s.io": "addons-791857",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "838464d46434f3e6463480e0b499a0493111eab4df4e3ed6e548d8abe7075335",
	            "SandboxKey": "/var/run/docker/netns/838464d46434",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-791857": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "002009ae9763ecdde824289a99be22a5caad9b24ec2d08c4f4654f0b0a112e69",
	                    "EndpointID": "c7c4e5f1a40685763891b4a90d2cf2e6d789f276ded6a42c5b647bbe1445ce01",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "fa:0e:12:84:f7:6e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-791857",
	                        "5f8c6486dcdf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-791857 -n addons-791857
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-791857 logs -n 25: (1.160369721s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-940312 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-940312   │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ delete  │ -p download-only-940312                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-940312   │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ start   │ -o=json --download-only -p download-only-516964 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-516964   │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ delete  │ -p download-only-516964                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-516964   │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ start   │ -o=json --download-only -p download-only-494334 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                           │ download-only-494334   │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-494334                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-494334   │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-940312                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-940312   │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-516964                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-516964   │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-494334                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-494334   │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ --download-only -p download-docker-321917 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-321917 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ -p download-docker-321917                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-321917 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ --download-only -p binary-mirror-072289 --alsologtostderr --binary-mirror http://127.0.0.1:46753 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-072289   │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ -p binary-mirror-072289                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-072289   │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ addons  │ disable dashboard -p addons-791857                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-791857          │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ addons  │ enable dashboard -p addons-791857                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-791857          │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ start   │ -p addons-791857 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-791857          │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:26 UTC │
	│ addons  │ addons-791857 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-791857          │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ addons  │ addons-791857 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-791857          │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	│ addons  │ enable headlamp -p addons-791857 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-791857          │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:04.233753   10286 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:04.233984   10286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:04.233991   10286 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:04.233995   10286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:04.234162   10286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:25:04.234629   10286 out.go:368] Setting JSON to false
	I1219 02:25:04.235390   10286 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":455,"bootTime":1766110649,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:04.235439   10286 start.go:143] virtualization: kvm guest
	I1219 02:25:04.237276   10286 out.go:179] * [addons-791857] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:04.238460   10286 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:25:04.238462   10286 notify.go:221] Checking for updates...
	I1219 02:25:04.240817   10286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:04.241981   10286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:25:04.243184   10286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:25:04.244275   10286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:25:04.245336   10286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:25:04.246562   10286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:04.268847   10286 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:25:04.268925   10286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:04.321388   10286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-19 02:25:04.311323087 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:04.321499   10286 docker.go:319] overlay module found
	I1219 02:25:04.323816   10286 out.go:179] * Using the docker driver based on user configuration
	I1219 02:25:04.324818   10286 start.go:309] selected driver: docker
	I1219 02:25:04.324832   10286 start.go:928] validating driver "docker" against <nil>
	I1219 02:25:04.324842   10286 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:25:04.325376   10286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:25:04.377850   10286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-19 02:25:04.368832387 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:25:04.378045   10286 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:04.378236   10286 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 02:25:04.379857   10286 out.go:179] * Using Docker driver with root privileges
	I1219 02:25:04.381096   10286 cni.go:84] Creating CNI manager for ""
	I1219 02:25:04.381200   10286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 02:25:04.381212   10286 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:04.381280   10286 start.go:353] cluster config:
	{Name:addons-791857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1219 02:25:04.382667   10286 out.go:179] * Starting "addons-791857" primary control-plane node in "addons-791857" cluster
	I1219 02:25:04.383963   10286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 02:25:04.385169   10286 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 02:25:04.386477   10286 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:04.386518   10286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 02:25:04.386530   10286 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:04.386563   10286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 02:25:04.386622   10286 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 02:25:04.386633   10286 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 02:25:04.386998   10286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/config.json ...
	I1219 02:25:04.387024   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/config.json: {Name:mk2fa1c08becfda12e3568c02e4dcff816f2d73e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:04.404690   10286 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1219 02:25:04.404821   10286 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory
	I1219 02:25:04.404839   10286 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory, skipping pull
	I1219 02:25:04.404844   10286 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in cache, skipping pull
	I1219 02:25:04.404851   10286 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 as a tarball
	I1219 02:25:04.404858   10286 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 from local cache
	I1219 02:25:18.395521   10286 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 from cached tarball
	I1219 02:25:18.395561   10286 cache.go:243] Successfully downloaded all kic artifacts
	I1219 02:25:18.395610   10286 start.go:360] acquireMachinesLock for addons-791857: {Name:mke15be50e9dd63ff80b5d97d17892540ef58ee6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:25:18.395730   10286 start.go:364] duration metric: took 97.595µs to acquireMachinesLock for "addons-791857"
	I1219 02:25:18.395757   10286 start.go:93] Provisioning new machine with config: &{Name:addons-791857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 02:25:18.395827   10286 start.go:125] createHost starting for "" (driver="docker")
	I1219 02:25:18.397541   10286 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1219 02:25:18.397783   10286 start.go:159] libmachine.API.Create for "addons-791857" (driver="docker")
	I1219 02:25:18.397821   10286 client.go:173] LocalClient.Create starting
	I1219 02:25:18.397912   10286 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem
	I1219 02:25:18.488696   10286 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem
	I1219 02:25:18.553284   10286 cli_runner.go:164] Run: docker network inspect addons-791857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 02:25:18.570499   10286 cli_runner.go:211] docker network inspect addons-791857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 02:25:18.570574   10286 network_create.go:284] running [docker network inspect addons-791857] to gather additional debugging logs...
	I1219 02:25:18.570593   10286 cli_runner.go:164] Run: docker network inspect addons-791857
	W1219 02:25:18.586398   10286 cli_runner.go:211] docker network inspect addons-791857 returned with exit code 1
	I1219 02:25:18.586427   10286 network_create.go:287] error running [docker network inspect addons-791857]: docker network inspect addons-791857: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-791857 not found
	I1219 02:25:18.586442   10286 network_create.go:289] output of [docker network inspect addons-791857]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-791857 not found
	
	** /stderr **
	I1219 02:25:18.586517   10286 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 02:25:18.602921   10286 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eca860}
	I1219 02:25:18.602970   10286 network_create.go:124] attempt to create docker network addons-791857 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1219 02:25:18.603020   10286 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-791857 addons-791857
	I1219 02:25:18.649061   10286 network_create.go:108] docker network addons-791857 192.168.49.0/24 created
	I1219 02:25:18.649088   10286 kic.go:121] calculated static IP "192.168.49.2" for the "addons-791857" container
	I1219 02:25:18.649167   10286 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 02:25:18.665603   10286 cli_runner.go:164] Run: docker volume create addons-791857 --label name.minikube.sigs.k8s.io=addons-791857 --label created_by.minikube.sigs.k8s.io=true
	I1219 02:25:18.682555   10286 oci.go:103] Successfully created a docker volume addons-791857
	I1219 02:25:18.682626   10286 cli_runner.go:164] Run: docker run --rm --name addons-791857-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-791857 --entrypoint /usr/bin/test -v addons-791857:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1219 02:25:22.841900   10286 cli_runner.go:217] Completed: docker run --rm --name addons-791857-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-791857 --entrypoint /usr/bin/test -v addons-791857:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib: (4.159232605s)
	I1219 02:25:22.841935   10286 oci.go:107] Successfully prepared a docker volume addons-791857
	I1219 02:25:22.841999   10286 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:22.842011   10286 kic.go:194] Starting extracting preloaded images to volume ...
	I1219 02:25:22.842052   10286 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-791857:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1219 02:25:26.678998   10286 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-791857:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.836891023s)
	I1219 02:25:26.679032   10286 kic.go:203] duration metric: took 3.837018565s to extract preloaded images to volume ...
	W1219 02:25:26.679181   10286 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1219 02:25:26.679259   10286 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1219 02:25:26.679314   10286 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 02:25:26.734047   10286 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-791857 --name addons-791857 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-791857 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-791857 --network addons-791857 --ip 192.168.49.2 --volume addons-791857:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1219 02:25:27.018248   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Running}}
	I1219 02:25:27.035913   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:27.054936   10286 cli_runner.go:164] Run: docker exec addons-791857 stat /var/lib/dpkg/alternatives/iptables
	I1219 02:25:27.103938   10286 oci.go:144] the created container "addons-791857" has a running status.
	I1219 02:25:27.103971   10286 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa...
	I1219 02:25:27.175422   10286 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 02:25:27.201189   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:27.218240   10286 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1219 02:25:27.218262   10286 kic_runner.go:114] Args: [docker exec --privileged addons-791857 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1219 02:25:27.285821   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:27.311266   10286 machine.go:94] provisionDockerMachine start ...
	I1219 02:25:27.311467   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:27.335022   10286 main.go:144] libmachine: Using SSH client type: native
	I1219 02:25:27.335271   10286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1219 02:25:27.335296   10286 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 02:25:27.483961   10286 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-791857
	
	I1219 02:25:27.483992   10286 ubuntu.go:182] provisioning hostname "addons-791857"
	I1219 02:25:27.484056   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:27.503113   10286 main.go:144] libmachine: Using SSH client type: native
	I1219 02:25:27.503324   10286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1219 02:25:27.503336   10286 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-791857 && echo "addons-791857" | sudo tee /etc/hostname
	I1219 02:25:27.658171   10286 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-791857
	
	I1219 02:25:27.658277   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:27.678221   10286 main.go:144] libmachine: Using SSH client type: native
	I1219 02:25:27.678440   10286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1219 02:25:27.678455   10286 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-791857' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-791857/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-791857' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 02:25:27.822244   10286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 02:25:27.822271   10286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 02:25:27.822289   10286 ubuntu.go:190] setting up certificates
	I1219 02:25:27.822298   10286 provision.go:84] configureAuth start
	I1219 02:25:27.822347   10286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-791857
	I1219 02:25:27.838923   10286 provision.go:143] copyHostCerts
	I1219 02:25:27.838995   10286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 02:25:27.839116   10286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 02:25:27.839186   10286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 02:25:27.839256   10286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.addons-791857 san=[127.0.0.1 192.168.49.2 addons-791857 localhost minikube]
	I1219 02:25:27.981743   10286 provision.go:177] copyRemoteCerts
	I1219 02:25:27.981798   10286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 02:25:27.981830   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:27.998222   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.099838   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1219 02:25:28.117695   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 02:25:28.133574   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 02:25:28.149581   10286 provision.go:87] duration metric: took 327.266981ms to configureAuth
	I1219 02:25:28.149614   10286 ubuntu.go:206] setting minikube options for container-runtime
	I1219 02:25:28.149805   10286 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:25:28.149920   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.166767   10286 main.go:144] libmachine: Using SSH client type: native
	I1219 02:25:28.166977   10286 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1219 02:25:28.166996   10286 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 02:25:28.442308   10286 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 02:25:28.442335   10286 machine.go:97] duration metric: took 1.131045992s to provisionDockerMachine
	I1219 02:25:28.442346   10286 client.go:176] duration metric: took 10.044512243s to LocalClient.Create
	I1219 02:25:28.442368   10286 start.go:167] duration metric: took 10.044583292s to libmachine.API.Create "addons-791857"
	I1219 02:25:28.442378   10286 start.go:293] postStartSetup for "addons-791857" (driver="docker")
	I1219 02:25:28.442392   10286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 02:25:28.442443   10286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 02:25:28.442481   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.460137   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.562861   10286 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 02:25:28.566301   10286 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 02:25:28.566336   10286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 02:25:28.566350   10286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 02:25:28.566401   10286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 02:25:28.566424   10286 start.go:296] duration metric: took 124.03895ms for postStartSetup
	I1219 02:25:28.566685   10286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-791857
	I1219 02:25:28.584664   10286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/config.json ...
	I1219 02:25:28.584935   10286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:25:28.584975   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.602741   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.699523   10286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 02:25:28.703872   10286 start.go:128] duration metric: took 10.308030596s to createHost
	I1219 02:25:28.703902   10286 start.go:83] releasing machines lock for "addons-791857", held for 10.308157941s
	I1219 02:25:28.703966   10286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-791857
	I1219 02:25:28.722345   10286 ssh_runner.go:195] Run: cat /version.json
	I1219 02:25:28.722390   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.722445   10286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 02:25:28.722529   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:28.740225   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.740553   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:28.890252   10286 ssh_runner.go:195] Run: systemctl --version
	I1219 02:25:28.896596   10286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 02:25:28.928747   10286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 02:25:28.933137   10286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 02:25:28.933211   10286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 02:25:28.957836   10286 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 02:25:28.957858   10286 start.go:496] detecting cgroup driver to use...
	I1219 02:25:28.957886   10286 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 02:25:28.957921   10286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 02:25:28.973556   10286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 02:25:28.985182   10286 docker.go:218] disabling cri-docker service (if available) ...
	I1219 02:25:28.985231   10286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 02:25:29.001110   10286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 02:25:29.018195   10286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 02:25:29.098268   10286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 02:25:29.184999   10286 docker.go:234] disabling docker service ...
	I1219 02:25:29.185059   10286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 02:25:29.202849   10286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 02:25:29.214848   10286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 02:25:29.297717   10286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 02:25:29.376887   10286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 02:25:29.388775   10286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 02:25:29.402373   10286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 02:25:29.402425   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.412126   10286 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 02:25:29.412183   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.420408   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.428468   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.436597   10286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 02:25:29.444031   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.451938   10286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.464462   10286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 02:25:29.472834   10286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 02:25:29.479741   10286 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 02:25:29.479784   10286 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 02:25:29.491090   10286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 02:25:29.498018   10286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:25:29.576730   10286 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 02:25:29.701964   10286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 02:25:29.702031   10286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 02:25:29.705732   10286 start.go:564] Will wait 60s for crictl version
	I1219 02:25:29.705777   10286 ssh_runner.go:195] Run: which crictl
	I1219 02:25:29.709240   10286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 02:25:29.734227   10286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 02:25:29.734349   10286 ssh_runner.go:195] Run: crio --version
	I1219 02:25:29.760835   10286 ssh_runner.go:195] Run: crio --version
	I1219 02:25:29.788528   10286 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 02:25:29.789820   10286 cli_runner.go:164] Run: docker network inspect addons-791857 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 02:25:29.806005   10286 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1219 02:25:29.809920   10286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 02:25:29.819487   10286 kubeadm.go:884] updating cluster {Name:addons-791857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 02:25:29.819588   10286 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:25:29.819627   10286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 02:25:29.850327   10286 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 02:25:29.850350   10286 crio.go:433] Images already preloaded, skipping extraction
	I1219 02:25:29.850395   10286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 02:25:29.874266   10286 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 02:25:29.874290   10286 cache_images.go:86] Images are preloaded, skipping loading
	I1219 02:25:29.874300   10286 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1219 02:25:29.874396   10286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-791857 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 02:25:29.874470   10286 ssh_runner.go:195] Run: crio config
	I1219 02:25:29.918571   10286 cni.go:84] Creating CNI manager for ""
	I1219 02:25:29.918593   10286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 02:25:29.918611   10286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 02:25:29.918630   10286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-791857 NodeName:addons-791857 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 02:25:29.918769   10286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-791857"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 02:25:29.918828   10286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 02:25:29.926910   10286 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 02:25:29.926971   10286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 02:25:29.934414   10286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1219 02:25:29.946643   10286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 02:25:29.962193   10286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1219 02:25:29.974675   10286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1219 02:25:29.978112   10286 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 02:25:29.987559   10286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:25:30.065730   10286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 02:25:30.089996   10286 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857 for IP: 192.168.49.2
	I1219 02:25:30.090020   10286 certs.go:195] generating shared ca certs ...
	I1219 02:25:30.090039   10286 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.090167   10286 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 02:25:30.125089   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt ...
	I1219 02:25:30.125122   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt: {Name:mk93220370fd0ee656707aaf7bad7ac75f80cf62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.125297   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key ...
	I1219 02:25:30.125314   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key: {Name:mk6464b375ea664b0b7e6aac31ae3239976bcb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.125419   10286 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 02:25:30.246455   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt ...
	I1219 02:25:30.246486   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt: {Name:mk640a70a316662d907929b9a6ee35a513d55016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.246673   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key ...
	I1219 02:25:30.246690   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key: {Name:mk70fcf1f094cda035aaf61abcc62f5350f14d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.246815   10286 certs.go:257] generating profile certs ...
	I1219 02:25:30.246889   10286 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.key
	I1219 02:25:30.246909   10286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt with IP's: []
	I1219 02:25:30.333708   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt ...
	I1219 02:25:30.333743   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: {Name:mk47664c75fc7928eb0378a2045a0e3158f05ea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.333940   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.key ...
	I1219 02:25:30.333965   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.key: {Name:mk78f96ac0759c1b26f6587875ae07d3e99d23a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.334075   10286 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key.0bee3924
	I1219 02:25:30.334099   10286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt.0bee3924 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1219 02:25:30.388337   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt.0bee3924 ...
	I1219 02:25:30.388369   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt.0bee3924: {Name:mkaf9f8498bba7027ed427dbd927c08f82436f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.388563   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key.0bee3924 ...
	I1219 02:25:30.388582   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key.0bee3924: {Name:mkdf31ea46a1019e3fe6ae1a8ee9803300003eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.388697   10286 certs.go:382] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt.0bee3924 -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt
	I1219 02:25:30.388829   10286 certs.go:386] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key.0bee3924 -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key
	I1219 02:25:30.388920   10286 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.key
	I1219 02:25:30.388959   10286 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.crt with IP's: []
	I1219 02:25:30.479358   10286 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.crt ...
	I1219 02:25:30.479392   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.crt: {Name:mka5b06da2b5b4397dd3d6cfa800284c5f8ab7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.479583   10286 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.key ...
	I1219 02:25:30.479608   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.key: {Name:mk043929d09112de1348210222f596debf0d0a3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:30.479825   10286 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 02:25:30.479885   10286 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 02:25:30.479925   10286 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 02:25:30.479970   10286 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 02:25:30.480560   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 02:25:30.498120   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 02:25:30.514695   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 02:25:30.531830   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 02:25:30.548964   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1219 02:25:30.565382   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 02:25:30.581754   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 02:25:30.597568   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 02:25:30.613572   10286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 02:25:30.631783   10286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 02:25:30.643313   10286 ssh_runner.go:195] Run: openssl version
	I1219 02:25:30.649137   10286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:25:30.655820   10286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 02:25:30.665419   10286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:25:30.668902   10286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:25:30.668959   10286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 02:25:30.701980   10286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 02:25:30.709501   10286 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 02:25:30.716611   10286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 02:25:30.719944   10286 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 02:25:30.719987   10286 kubeadm.go:401] StartCluster: {Name:addons-791857 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-791857 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:30.720048   10286 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:25:30.720084   10286 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:25:30.746307   10286 cri.go:92] found id: ""
	I1219 02:25:30.746383   10286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 02:25:30.754276   10286 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 02:25:30.762093   10286 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1219 02:25:30.762158   10286 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 02:25:30.769634   10286 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 02:25:30.769657   10286 kubeadm.go:158] found existing configuration files:
	
	I1219 02:25:30.769712   10286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 02:25:30.776776   10286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 02:25:30.776834   10286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 02:25:30.783694   10286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 02:25:30.790973   10286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 02:25:30.791033   10286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 02:25:30.798795   10286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 02:25:30.805922   10286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 02:25:30.805979   10286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 02:25:30.812654   10286 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 02:25:30.819775   10286 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 02:25:30.819831   10286 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 02:25:30.826696   10286 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1219 02:25:30.892858   10286 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1219 02:25:30.947250   10286 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1219 02:25:40.372444   10286 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1219 02:25:40.372541   10286 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 02:25:40.372651   10286 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 02:25:40.372747   10286 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 02:25:40.372793   10286 kubeadm.go:319] OS: Linux
	I1219 02:25:40.372857   10286 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 02:25:40.372945   10286 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 02:25:40.373022   10286 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 02:25:40.373096   10286 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 02:25:40.373182   10286 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 02:25:40.373231   10286 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 02:25:40.373278   10286 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 02:25:40.373328   10286 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 02:25:40.373432   10286 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 02:25:40.373578   10286 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 02:25:40.373764   10286 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 02:25:40.373830   10286 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 02:25:40.375764   10286 out.go:252]   - Generating certificates and keys ...
	I1219 02:25:40.375852   10286 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 02:25:40.375939   10286 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 02:25:40.376045   10286 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 02:25:40.376117   10286 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 02:25:40.376195   10286 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 02:25:40.376265   10286 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 02:25:40.376347   10286 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 02:25:40.376479   10286 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-791857 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1219 02:25:40.376570   10286 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 02:25:40.376741   10286 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-791857 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1219 02:25:40.376852   10286 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 02:25:40.376915   10286 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 02:25:40.376957   10286 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 02:25:40.377007   10286 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 02:25:40.377053   10286 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 02:25:40.377117   10286 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 02:25:40.377190   10286 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 02:25:40.377285   10286 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 02:25:40.377365   10286 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 02:25:40.377481   10286 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 02:25:40.377571   10286 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 02:25:40.378737   10286 out.go:252]   - Booting up control plane ...
	I1219 02:25:40.378826   10286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 02:25:40.378928   10286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 02:25:40.379018   10286 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 02:25:40.379169   10286 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 02:25:40.379263   10286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 02:25:40.379386   10286 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 02:25:40.379493   10286 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 02:25:40.379552   10286 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 02:25:40.379723   10286 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 02:25:40.379842   10286 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 02:25:40.379919   10286 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001369512s
	I1219 02:25:40.380038   10286 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 02:25:40.380149   10286 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1219 02:25:40.380273   10286 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 02:25:40.380384   10286 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 02:25:40.380495   10286 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.360265797s
	I1219 02:25:40.380594   10286 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.975499535s
	I1219 02:25:40.380692   10286 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501087246s
	I1219 02:25:40.380831   10286 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 02:25:40.380964   10286 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 02:25:40.381014   10286 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 02:25:40.381175   10286 kubeadm.go:319] [mark-control-plane] Marking the node addons-791857 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 02:25:40.381233   10286 kubeadm.go:319] [bootstrap-token] Using token: fc8dpx.s77uezw1ei6hvydq
	I1219 02:25:40.382476   10286 out.go:252]   - Configuring RBAC rules ...
	I1219 02:25:40.382576   10286 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 02:25:40.382648   10286 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 02:25:40.382796   10286 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 02:25:40.382912   10286 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 02:25:40.383021   10286 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 02:25:40.383100   10286 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 02:25:40.383202   10286 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 02:25:40.383246   10286 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 02:25:40.383286   10286 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 02:25:40.383291   10286 kubeadm.go:319] 
	I1219 02:25:40.383352   10286 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 02:25:40.383358   10286 kubeadm.go:319] 
	I1219 02:25:40.383421   10286 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 02:25:40.383427   10286 kubeadm.go:319] 
	I1219 02:25:40.383448   10286 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 02:25:40.383499   10286 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 02:25:40.383543   10286 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 02:25:40.383549   10286 kubeadm.go:319] 
	I1219 02:25:40.383601   10286 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 02:25:40.383612   10286 kubeadm.go:319] 
	I1219 02:25:40.383649   10286 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 02:25:40.383654   10286 kubeadm.go:319] 
	I1219 02:25:40.383707   10286 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 02:25:40.383771   10286 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 02:25:40.383829   10286 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 02:25:40.383838   10286 kubeadm.go:319] 
	I1219 02:25:40.383906   10286 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 02:25:40.384018   10286 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 02:25:40.384030   10286 kubeadm.go:319] 
	I1219 02:25:40.384150   10286 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fc8dpx.s77uezw1ei6hvydq \
	I1219 02:25:40.384253   10286 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 02:25:40.384272   10286 kubeadm.go:319] 	--control-plane 
	I1219 02:25:40.384277   10286 kubeadm.go:319] 
	I1219 02:25:40.384357   10286 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 02:25:40.384365   10286 kubeadm.go:319] 
	I1219 02:25:40.384443   10286 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fc8dpx.s77uezw1ei6hvydq \
	I1219 02:25:40.384544   10286 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
	I1219 02:25:40.384555   10286 cni.go:84] Creating CNI manager for ""
	I1219 02:25:40.384562   10286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 02:25:40.386493   10286 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 02:25:40.387478   10286 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 02:25:40.391679   10286 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1219 02:25:40.391714   10286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 02:25:40.404510   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1219 02:25:40.609593   10286 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 02:25:40.609685   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:40.609745   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-791857 minikube.k8s.io/updated_at=2025_12_19T02_25_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=addons-791857 minikube.k8s.io/primary=true
	I1219 02:25:40.619284   10286 ops.go:34] apiserver oom_adj: -16
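The -16 reported at 02:25:40.619284 is read from /proc/&lt;apiserver-pid&gt;/oom_adj (see the pgrep command at 02:25:40.609593) and means the kernel OOM killer is strongly discouraged from picking the apiserver. A minimal sketch of that check, purely illustrative and not the harness code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the kube-apiserver PID the same way the logged command does.
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err) // pgrep exits non-zero if no process matches
		}
		pid := strings.Fields(string(out))[0] // first match if several
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj) // the log above shows -16
	}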
	I1219 02:25:40.681946   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:41.182990   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:41.682637   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:42.182360   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:42.682989   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:43.182968   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:43.682066   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:44.182537   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:44.682185   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:45.182024   10286 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 02:25:45.242045   10286 kubeadm.go:1114] duration metric: took 4.632411731s to wait for elevateKubeSystemPrivileges
	I1219 02:25:45.242084   10286 kubeadm.go:403] duration metric: took 14.522098487s to StartCluster
	I1219 02:25:45.242107   10286 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:45.242231   10286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:25:45.242572   10286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:25:45.242801   10286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 02:25:45.242825   10286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 02:25:45.242885   10286 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1219 02:25:45.243008   10286 addons.go:70] Setting yakd=true in profile "addons-791857"
	I1219 02:25:45.243020   10286 addons.go:70] Setting ingress-dns=true in profile "addons-791857"
	I1219 02:25:45.243038   10286 addons.go:70] Setting storage-provisioner=true in profile "addons-791857"
	I1219 02:25:45.243049   10286 addons.go:239] Setting addon storage-provisioner=true in "addons-791857"
	I1219 02:25:45.243054   10286 addons.go:239] Setting addon ingress-dns=true in "addons-791857"
	I1219 02:25:45.243049   10286 addons.go:70] Setting registry-creds=true in profile "addons-791857"
	I1219 02:25:45.243052   10286 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-791857"
	I1219 02:25:45.243073   10286 addons.go:239] Setting addon registry-creds=true in "addons-791857"
	I1219 02:25:45.243082   10286 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:25:45.243092   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243099   10286 addons.go:70] Setting volcano=true in profile "addons-791857"
	I1219 02:25:45.243103   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243111   10286 addons.go:239] Setting addon volcano=true in "addons-791857"
	I1219 02:25:45.243091   10286 addons.go:70] Setting gcp-auth=true in profile "addons-791857"
	I1219 02:25:45.243131   10286 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-791857"
	I1219 02:25:45.243140   10286 addons.go:70] Setting volumesnapshots=true in profile "addons-791857"
	I1219 02:25:45.243151   10286 mustload.go:66] Loading cluster: addons-791857
	I1219 02:25:45.243154   10286 addons.go:239] Setting addon volumesnapshots=true in "addons-791857"
	I1219 02:25:45.243161   10286 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-791857"
	I1219 02:25:45.243169   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243179   10286 addons.go:70] Setting cloud-spanner=true in profile "addons-791857"
	I1219 02:25:45.243209   10286 addons.go:239] Setting addon cloud-spanner=true in "addons-791857"
	I1219 02:25:45.243226   10286 addons.go:70] Setting metrics-server=true in profile "addons-791857"
	I1219 02:25:45.243228   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243238   10286 addons.go:239] Setting addon metrics-server=true in "addons-791857"
	I1219 02:25:45.243263   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243379   10286 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:25:45.243632   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243634   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243637   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243643   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243660   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243713   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.243824   10286 addons.go:70] Setting ingress=true in profile "addons-791857"
	I1219 02:25:45.243886   10286 addons.go:239] Setting addon ingress=true in "addons-791857"
	I1219 02:25:45.244038   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243030   10286 addons.go:239] Setting addon yakd=true in "addons-791857"
	I1219 02:25:45.244337   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.244404   10286 addons.go:70] Setting inspektor-gadget=true in profile "addons-791857"
	I1219 02:25:45.244473   10286 addons.go:239] Setting addon inspektor-gadget=true in "addons-791857"
	I1219 02:25:45.243152   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243083   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243133   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.244672   10286 addons.go:70] Setting default-storageclass=true in profile "addons-791857"
	I1219 02:25:45.244751   10286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-791857"
	I1219 02:25:45.244825   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.245079   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.244683   10286 addons.go:70] Setting registry=true in profile "addons-791857"
	I1219 02:25:45.245231   10286 addons.go:239] Setting addon registry=true in "addons-791857"
	I1219 02:25:45.245256   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.243091   10286 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-791857"
	I1219 02:25:45.245271   10286 out.go:179] * Verifying Kubernetes components...
	I1219 02:25:45.244142   10286 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-791857"
	I1219 02:25:45.245398   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.245530   10286 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-791857"
	I1219 02:25:45.245614   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.245838   10286 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-791857"
	I1219 02:25:45.243171   10286 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-791857"
	I1219 02:25:45.246140   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.247840   10286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 02:25:45.253094   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.253125   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.253776   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.254394   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.256467   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.256864   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.257688   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.258503   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.270074   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.304554   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.307305   10286 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1219 02:25:45.308510   10286 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1219 02:25:45.308545   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1219 02:25:45.308731   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.316473   10286 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1219 02:25:45.317673   10286 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 02:25:45.317696   10286 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 02:25:45.317761   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.318212   10286 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1219 02:25:45.320722   10286 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1219 02:25:45.320744   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1219 02:25:45.320802   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.324802   10286 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1219 02:25:45.327568   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1219 02:25:45.327634   10286 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1219 02:25:45.327734   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.345126   10286 addons.go:239] Setting addon default-storageclass=true in "addons-791857"
	I1219 02:25:45.345178   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.345640   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.347958   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1219 02:25:45.350383   10286 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1219 02:25:45.351311   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1219 02:25:45.351330   10286 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1219 02:25:45.351395   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.352835   10286 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-791857"
	I1219 02:25:45.352883   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:45.353342   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:45.361434   10286 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1219 02:25:45.361765   10286 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1219 02:25:45.364073   10286 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1219 02:25:45.364101   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1219 02:25:45.364183   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.367284   10286 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1219 02:25:45.367303   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1219 02:25:45.367364   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.371299   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1219 02:25:45.372332   10286 out.go:179]   - Using image docker.io/registry:3.0.0
	I1219 02:25:45.373565   10286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 02:25:45.373593   10286 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1219 02:25:45.374438   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1219 02:25:45.373827   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1219 02:25:45.374536   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.374934   10286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 02:25:45.374981   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 02:25:45.375061   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.375579   10286 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1219 02:25:45.377620   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1219 02:25:45.377663   10286 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:25:45.379181   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1219 02:25:45.379221   10286 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:25:45.380935   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1219 02:25:45.381346   10286 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1219 02:25:45.381847   10286 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1219 02:25:45.382736   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1219 02:25:45.383042   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.383258   10286 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1219 02:25:45.383273   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1219 02:25:45.383363   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.384237   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1219 02:25:45.385481   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.385972   10286 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1219 02:25:45.387215   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1219 02:25:45.387278   10286 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1219 02:25:45.387303   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1219 02:25:45.387348   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.388819   10286 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1219 02:25:45.390206   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1219 02:25:45.390229   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1219 02:25:45.390320   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.399909   10286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 02:25:45.399934   10286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 02:25:45.400000   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.403770   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	W1219 02:25:45.404349   10286 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1219 02:25:45.419011   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.425427   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.428491   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.429806   10286 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1219 02:25:45.431774   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.433528   10286 out.go:179]   - Using image docker.io/busybox:stable
	I1219 02:25:45.434585   10286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1219 02:25:45.434599   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1219 02:25:45.434767   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:45.436421   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.436459   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.443340   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.448221   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.449954   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.453728   10286 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
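The sed pipeline above rewrites the coredns ConfigMap so in-cluster DNS resolves host.minikube.internal to the gateway address (confirmed at 02:25:45.910025), and also inserts a log directive before errors. Decoded from the sed expression, the injected Corefile stanza is roughly:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}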
	I1219 02:25:45.457853   10286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 02:25:45.464924   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	W1219 02:25:45.468202   10286 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1219 02:25:45.468235   10286 retry.go:31] will retry after 196.209396ms: ssh: handshake failed: EOF
	I1219 02:25:45.468427   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	W1219 02:25:45.470212   10286 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1219 02:25:45.470234   10286 retry.go:31] will retry after 154.168092ms: ssh: handshake failed: EOF
	I1219 02:25:45.471580   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.480007   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:45.546873   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1219 02:25:45.570341   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1219 02:25:45.570363   10286 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1219 02:25:45.575466   10286 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 02:25:45.575580   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1219 02:25:45.593336   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1219 02:25:45.593363   10286 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1219 02:25:45.593865   10286 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1219 02:25:45.593881   10286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1219 02:25:45.600610   10286 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 02:25:45.600694   10286 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 02:25:45.605032   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1219 02:25:45.616411   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1219 02:25:45.616439   10286 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1219 02:25:45.619451   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1219 02:25:45.619535   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1219 02:25:45.627819   10286 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1219 02:25:45.627844   10286 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1219 02:25:45.634437   10286 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1219 02:25:45.634463   10286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1219 02:25:45.639343   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1219 02:25:45.643760   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1219 02:25:45.646442   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1219 02:25:45.647353   10286 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 02:25:45.647371   10286 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 02:25:45.649779   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 02:25:45.660085   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1219 02:25:45.660541   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 02:25:45.679821   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1219 02:25:45.679863   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1219 02:25:45.682056   10286 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1219 02:25:45.682080   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1219 02:25:45.698753   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 02:25:45.701074   10286 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1219 02:25:45.701102   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1219 02:25:45.711843   10286 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1219 02:25:45.711891   10286 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1219 02:25:45.757357   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1219 02:25:45.757389   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1219 02:25:45.757786   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1219 02:25:45.767381   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1219 02:25:45.769534   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1219 02:25:45.769560   10286 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1219 02:25:45.807847   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1219 02:25:45.807879   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1219 02:25:45.811436   10286 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:25:45.811461   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1219 02:25:45.818685   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1219 02:25:45.888449   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:25:45.901869   10286 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1219 02:25:45.901908   10286 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1219 02:25:45.910025   10286 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1219 02:25:45.912039   10286 node_ready.go:35] waiting up to 6m0s for node "addons-791857" to be "Ready" ...
	I1219 02:25:45.944512   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1219 02:25:45.975635   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1219 02:25:45.975665   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1219 02:25:46.050901   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1219 02:25:46.050942   10286 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1219 02:25:46.119627   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1219 02:25:46.119651   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1219 02:25:46.189653   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1219 02:25:46.189797   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1219 02:25:46.251974   10286 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1219 02:25:46.252003   10286 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1219 02:25:46.317393   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1219 02:25:46.414450   10286 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-791857" context rescaled to 1 replicas
	I1219 02:25:46.655814   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.016427503s)
	I1219 02:25:46.655949   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.009468892s)
	I1219 02:25:46.656201   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.006364309s)
	W1219 02:25:46.684811   10286 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1219 02:25:46.713875   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.015073999s)
	I1219 02:25:46.713919   10286 addons.go:500] Verifying addon metrics-server=true in "addons-791857"
	I1219 02:25:46.714265   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:46.714559   10286 addons.go:500] Verifying addon registry=true in "addons-791857"
	I1219 02:25:46.714864   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:46.718470   10286 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-791857 service yakd-dashboard -n yakd-dashboard
	
	I1219 02:25:46.745682   10286 out.go:179] * Verifying registry addon...
	I1219 02:25:46.747771   10286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1219 02:25:46.752498   10286 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1219 02:25:46.752667   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
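The kapi.go:96 lines here and below poll the pods selected by kubernetes.io/minikube-addons=registry in kube-system until they leave Pending. A minimal client-go sketch of that kind of wait, assuming the kubeconfig path used elsewhere in this log; this is an illustration, not minikube's kapi implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until a pod matching the addon label selector reports Running.
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("registry pod is Running:", p.Name)
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}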
	I1219 02:25:47.251217   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:47.465988   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.577480598s)
	W1219 02:25:47.466026   10286 addons.go:479] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1219 02:25:47.466047   10286 retry.go:31] will retry after 257.944725ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
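The failure above is an ordering problem: the snapshot CRDs and the VolumeSnapshotClass that uses them are applied in a single kubectl apply, and the class cannot be mapped until the new CRDs are registered, hence "ensure CRDs are installed first". The harness simply retries, switching to kubectl apply --force at 02:25:47.724772, which completes about 2.5s later (see the final line of this section). As a hedged alternative sketch, one could apply the CRDs first and wait for them to become Established before applying the class; file names are taken from the log, and the wait step is illustrative rather than what minikube does:

	package main

	import (
		"log"
		"os/exec"
	)

	// run shells out to kubectl and aborts on the first non-zero exit.
	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		log.Printf("kubectl %v\n%s", args, out)
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		run("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
		run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	}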
	I1219 02:25:47.466146   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.521593847s)
	I1219 02:25:47.466184   10286 addons.go:500] Verifying addon ingress=true in "addons-791857"
	I1219 02:25:47.466470   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.149012992s)
	I1219 02:25:47.466502   10286 addons.go:500] Verifying addon csi-hostpath-driver=true in "addons-791857"
	I1219 02:25:47.466535   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:47.466808   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:47.494041   10286 out.go:179] * Verifying csi-hostpath-driver addon...
	I1219 02:25:47.494046   10286 out.go:179] * Verifying ingress addon...
	I1219 02:25:47.495678   10286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1219 02:25:47.495869   10286 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1219 02:25:47.498787   10286 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1219 02:25:47.498806   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:47.498960   10286 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1219 02:25:47.498971   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:47.724772   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1219 02:25:47.751362   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:47.915016   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:47.998813   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:47.998963   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:48.250864   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:48.498720   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:48.498839   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:48.751938   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:48.998754   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:48.998832   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:49.251391   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:49.499424   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:49.499585   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:49.751612   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:49.915196   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:49.999428   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:49.999572   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:50.194823   10286 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.470005656s)
	I1219 02:25:50.250675   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:50.500213   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:50.500333   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:50.751277   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:50.999691   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:50.999912   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:51.251113   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:51.499274   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:51.499315   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:51.751508   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:51.999096   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:51.999125   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:52.251697   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:52.414573   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:52.499550   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:52.499601   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:52.750654   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:52.920812   10286 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1219 02:25:52.920873   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:52.939133   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:52.999204   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:52.999244   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:53.053895   10286 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1219 02:25:53.066973   10286 addons.go:239] Setting addon gcp-auth=true in "addons-791857"
	I1219 02:25:53.067026   10286 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:25:53.067431   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:53.085183   10286 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1219 02:25:53.085280   10286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:25:53.103216   10286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:25:53.202906   10286 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1219 02:25:53.204436   10286 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1219 02:25:53.205857   10286 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1219 02:25:53.205878   10286 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1219 02:25:53.219607   10286 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1219 02:25:53.219632   10286 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1219 02:25:53.232531   10286 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1219 02:25:53.232553   10286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1219 02:25:53.245799   10286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1219 02:25:53.250297   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:53.499657   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:53.499686   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:53.549051   10286 addons.go:500] Verifying addon gcp-auth=true in "addons-791857"
	I1219 02:25:53.549410   10286 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:25:53.570687   10286 out.go:179] * Verifying gcp-auth addon...
	I1219 02:25:53.572917   10286 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1219 02:25:53.600239   10286 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1219 02:25:53.600264   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:53.750696   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:53.999215   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:53.999291   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:54.075721   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:54.250408   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:54.415128   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:54.498741   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:54.498737   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:54.576844   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:54.751561   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:54.999117   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:54.999267   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:55.075803   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:55.250351   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:55.498916   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:55.499156   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:55.576603   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:55.751367   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:55.998393   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:55.998539   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:56.075845   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:56.250637   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1219 02:25:56.415289   10286 node_ready.go:57] node "addons-791857" has "Ready":"False" status (will retry)
	I1219 02:25:56.498858   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:56.499014   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:56.576277   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:56.751430   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:56.998722   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:56.999005   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:57.076318   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:57.250994   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:57.499466   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:57.499479   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:57.575884   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:57.750821   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:57.999311   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:57.999407   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:58.076062   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:58.251757   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:58.414739   10286 node_ready.go:49] node "addons-791857" is "Ready"
	I1219 02:25:58.414769   10286 node_ready.go:38] duration metric: took 12.502696641s for node "addons-791857" to be "Ready" ...
	I1219 02:25:58.414782   10286 api_server.go:52] waiting for apiserver process to appear ...
	I1219 02:25:58.414830   10286 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:25:58.430392   10286 api_server.go:72] duration metric: took 13.187531738s to wait for apiserver process to appear ...
	I1219 02:25:58.430444   10286 api_server.go:88] waiting for apiserver healthz status ...
	I1219 02:25:58.430470   10286 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1219 02:25:58.434504   10286 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1219 02:25:58.435458   10286 api_server.go:141] control plane version: v1.34.3
	I1219 02:25:58.435481   10286 api_server.go:131] duration metric: took 5.028863ms to wait for apiserver health ...
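The two probes above (the /healthz read and the control-plane version read) can be reproduced against the same API endpoint with kubectl; a minimal sketch, assuming the host kubeconfig has the addons-791857 context selected:

    # Sketch only: the health and version checks the log performs at 02:25:58.
    kubectl get --raw /healthz    # expect: ok
    kubectl version               # server version should report v1.34.3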
	I1219 02:25:58.435489   10286 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 02:25:58.439298   10286 system_pods.go:59] 20 kube-system pods found
	I1219 02:25:58.439325   10286 system_pods.go:61] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending
	I1219 02:25:58.439334   10286 system_pods.go:61] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:58.439340   10286 system_pods.go:61] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:58.439347   10286 system_pods.go:61] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:58.439353   10286 system_pods.go:61] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:58.439357   10286 system_pods.go:61] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:58.439361   10286 system_pods.go:61] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:58.439366   10286 system_pods.go:61] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:58.439372   10286 system_pods.go:61] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:58.439384   10286 system_pods.go:61] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:58.439391   10286 system_pods.go:61] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:58.439395   10286 system_pods.go:61] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:58.439399   10286 system_pods.go:61] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:58.439404   10286 system_pods.go:61] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:58.439411   10286 system_pods.go:61] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:58.439415   10286 system_pods.go:61] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:58.439419   10286 system_pods.go:61] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending
	I1219 02:25:58.439423   10286 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.439432   10286 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.439442   10286 system_pods.go:61] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:58.439449   10286 system_pods.go:74] duration metric: took 3.954747ms to wait for pod list to return data ...
	I1219 02:25:58.439459   10286 default_sa.go:34] waiting for default service account to be created ...
	I1219 02:25:58.441294   10286 default_sa.go:45] found service account: "default"
	I1219 02:25:58.441314   10286 default_sa.go:55] duration metric: took 1.84934ms for default service account to be created ...
	I1219 02:25:58.441324   10286 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 02:25:58.448000   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:58.448030   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending
	I1219 02:25:58.448040   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:58.448049   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:58.448059   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:58.448067   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:58.448075   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:58.448081   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:58.448086   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:58.448091   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:58.448099   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:58.448116   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:58.448122   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:58.448129   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:58.448137   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:58.448144   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:58.448152   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:58.448157   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending
	I1219 02:25:58.448168   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.448177   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.448184   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:58.448201   10286 retry.go:31] will retry after 247.149321ms: missing components: kube-dns
	I1219 02:25:58.499402   10286 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1219 02:25:58.499416   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:58.499429   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:58.600338   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:58.703364   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:58.703406   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:25:58.703429   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:58.703441   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:58.703450   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:58.703465   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:58.703471   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:58.703483   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:58.703489   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:58.703509   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:58.703523   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:58.703528   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:58.703656   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:58.703672   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:58.703680   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:58.703693   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:58.703722   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:58.703731   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:25:58.703747   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.703761   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:58.703773   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:58.703790   10286 retry.go:31] will retry after 372.451905ms: missing components: kube-dns
	I1219 02:25:58.800640   10286 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1219 02:25:58.800665   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:59.006163   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:59.006388   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:59.076914   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:59.081067   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:59.081104   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:25:59.081114   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:59.081122   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:59.081131   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:59.081149   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:59.081156   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:59.081163   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:59.081168   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:59.081173   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:59.081183   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:59.081188   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:59.081194   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:59.081204   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:59.081212   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:59.081220   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:59.081227   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:59.081234   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:25:59.081243   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.081252   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.081260   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:59.081276   10286 retry.go:31] will retry after 472.328916ms: missing components: kube-dns
	I1219 02:25:59.252576   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:59.499484   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:25:59.499804   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:25:59.558399   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:59.558440   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:25:59.558452   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 02:25:59.558463   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:59.558471   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:59.558481   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:59.558487   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:59.558493   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:59.558499   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:59.558504   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:59.558513   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:59.558524   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:59.558530   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:59.558542   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:59.558552   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:59.558564   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:59.558572   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:59.558579   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:25:59.558587   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.558603   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.558616   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 02:25:59.558633   10286 retry.go:31] will retry after 389.981082ms: missing components: kube-dns
	I1219 02:25:59.577129   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:25:59.751868   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:25:59.953387   10286 system_pods.go:86] 20 kube-system pods found
	I1219 02:25:59.953428   10286 system_pods.go:89] "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1219 02:25:59.953436   10286 system_pods.go:89] "coredns-66bc5c9577-w88lw" [4b95f3bb-adda-464d-acf4-50575055445a] Running
	I1219 02:25:59.953448   10286 system_pods.go:89] "csi-hostpath-attacher-0" [e0fda27e-619f-40ee-a16b-42b17bf27f67] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1219 02:25:59.953471   10286 system_pods.go:89] "csi-hostpath-resizer-0" [8d778eab-a0e4-47a8-a6de-29401f6be9a6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1219 02:25:59.953484   10286 system_pods.go:89] "csi-hostpathplugin-stf22" [937a1596-c0e8-4cfd-acc7-e0cb17aadc8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1219 02:25:59.953491   10286 system_pods.go:89] "etcd-addons-791857" [686fcc6a-7965-4304-8858-41d979a138fc] Running
	I1219 02:25:59.953502   10286 system_pods.go:89] "kindnet-hdbwg" [7018db27-6306-4e23-8dc9-bf41d6198195] Running
	I1219 02:25:59.953508   10286 system_pods.go:89] "kube-apiserver-addons-791857" [f5ce6454-689d-4a6a-b221-cc7263395975] Running
	I1219 02:25:59.953514   10286 system_pods.go:89] "kube-controller-manager-addons-791857" [68702782-1743-429b-a9dc-94c63b062f7a] Running
	I1219 02:25:59.953526   10286 system_pods.go:89] "kube-ingress-dns-minikube" [c27bf96b-ebf9-426c-b1ba-385d03cbd356] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1219 02:25:59.953531   10286 system_pods.go:89] "kube-proxy-7g9j9" [b514b755-f501-4933-8719-43c3fdfc2d33] Running
	I1219 02:25:59.953537   10286 system_pods.go:89] "kube-scheduler-addons-791857" [053aa4f2-f0c7-4742-acc9-ca29276ca941] Running
	I1219 02:25:59.953545   10286 system_pods.go:89] "metrics-server-85b7d694d7-dnphb" [4e7c7cf4-66fc-44e5-b383-c17b49201f15] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 02:25:59.953552   10286 system_pods.go:89] "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1219 02:25:59.953558   10286 system_pods.go:89] "registry-6b586f9694-j2n8x" [2570cf1e-ddd4-4270-8adc-05df916b18a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1219 02:25:59.953563   10286 system_pods.go:89] "registry-creds-764b6fb674-xdlrg" [5bd93e94-0edf-490d-a978-5aa0fe38b999] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1219 02:25:59.953568   10286 system_pods.go:89] "registry-proxy-wsz68" [1af55b65-71a7-4559-b943-5db589afbf6f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1219 02:25:59.953581   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d42wc" [13df303d-c677-4d6f-9291-bfd820bf6b74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.953592   10286 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v6xz5" [d91f0d4f-4cb4-4cf1-a698-688320a8623d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1219 02:25:59.953598   10286 system_pods.go:89] "storage-provisioner" [048f74f6-54f2-4dc5-9893-45ede6f7b3fc] Running
	I1219 02:25:59.953611   10286 system_pods.go:126] duration metric: took 1.51227941s to wait for k8s-apps to be running ...
	I1219 02:25:59.953625   10286 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 02:25:59.953678   10286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:25:59.970998   10286 system_svc.go:56] duration metric: took 17.366101ms WaitForService to wait for kubelet
	I1219 02:25:59.971032   10286 kubeadm.go:587] duration metric: took 14.728176114s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 02:25:59.971055   10286 node_conditions.go:102] verifying NodePressure condition ...
	I1219 02:25:59.974526   10286 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 02:25:59.974559   10286 node_conditions.go:123] node cpu capacity is 8
	I1219 02:25:59.974579   10286 node_conditions.go:105] duration metric: took 3.517868ms to run NodePressure ...
	I1219 02:25:59.974593   10286 start.go:242] waiting for startup goroutines ...
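At this point every gate in the kubeadm wait map above has passed: the node is Ready, the kube-system pods (including kube-dns/CoreDNS) are running, the default service account exists, and the kubelet service is active. A minimal sketch of the equivalent manual spot-checks, assuming the host kubeconfig has the addons-791857 context selected and relying on the standard k8s-app=kube-dns label carried by the CoreDNS pods:

    # Sketch only: spot-check the same readiness gates the log polls above.
    kubectl get node addons-791857
    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n default get serviceaccount default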
	I1219 02:26:00.000141   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:00.000141   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:00.076185   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:00.251781   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:00.499091   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:00.499101   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:00.576464   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:00.752043   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:00.999068   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:00.999102   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:01.076649   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:01.251904   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:01.498930   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:01.498949   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:01.577020   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:01.750800   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:02.015127   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:02.015223   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:02.115688   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:02.251760   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:02.499932   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:02.500174   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:02.576821   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:02.751117   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:02.999252   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:02.999390   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:03.076099   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:03.251663   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:03.499990   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:03.500097   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:03.600751   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:03.751459   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:03.999869   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:04.000010   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:04.076471   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:04.251517   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:04.498889   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:04.499094   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:04.575279   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:04.800060   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:04.999128   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:04.999244   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:05.099907   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:05.250486   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:05.500380   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:05.500409   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:05.576132   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:05.750955   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:05.999209   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:05.999307   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:06.076312   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:06.251400   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:06.499342   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:06.499445   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:06.575947   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:06.750393   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:06.999769   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:06.999772   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:07.076167   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:07.251443   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:07.501760   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:07.501858   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:07.577008   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:07.750895   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:08.000789   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:08.000815   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:08.076785   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:08.252076   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:08.499352   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:08.499461   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:08.575998   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:08.843625   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:09.018275   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:09.018593   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:09.118046   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:09.251440   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:09.499906   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:09.500076   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:09.577176   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:09.751975   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:10.000415   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:10.000537   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:10.076345   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:10.251354   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:10.499557   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:10.499767   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:10.576312   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:10.751271   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:10.999593   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:10.999592   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:11.076415   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:11.251530   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:11.500211   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:11.500390   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:11.576753   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:11.751254   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:11.999384   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:11.999547   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:12.100011   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:12.251181   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:12.499955   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:12.499983   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:12.576404   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:12.751172   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:12.999587   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:12.999693   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:13.076596   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:13.251656   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:13.500593   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:13.500738   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:13.576580   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:13.751655   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:14.000588   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:14.000761   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:14.102605   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:14.250637   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:14.498515   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:14.498523   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:14.575987   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:14.751065   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:14.999392   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:14.999420   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:15.099693   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:15.251410   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:15.499918   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:15.500056   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:15.576403   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:15.751550   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:16.000138   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:16.000148   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:16.076345   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:16.252254   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:16.499108   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:16.499143   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:16.576569   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:16.751848   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:17.001255   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:17.001316   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:17.076287   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:17.250960   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:17.499151   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:17.499222   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:17.575568   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:17.751377   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:17.999645   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:17.999683   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:18.075824   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:18.250382   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:18.499954   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:18.499976   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:18.576797   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:18.750490   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:19.004611   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:19.004952   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:19.105320   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:19.251387   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:19.499807   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:19.500045   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:19.576401   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:19.751486   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:20.000100   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:20.000108   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:20.076804   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:20.251265   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:20.499372   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:20.499406   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:20.576033   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:20.750808   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:20.998952   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:20.998996   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:21.100110   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:21.250871   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:21.498863   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:21.498898   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:21.576330   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:21.751376   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:21.998827   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:21.998828   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:22.076339   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:22.251230   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:22.500084   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:22.500102   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:22.576624   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:22.751636   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1219 02:26:23.000488   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:23.000825   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:23.076792   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:23.252250   10286 kapi.go:107] duration metric: took 36.504476539s to wait for kubernetes.io/minikube-addons=registry ...
	I1219 02:26:23.499386   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:23.500084   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:23.576666   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:23.999821   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:23.999952   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:24.076286   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:24.500286   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:24.500422   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:24.576006   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:24.999294   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:24.999530   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:25.098936   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:25.499176   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:25.499280   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:25.575888   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:25.999461   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:25.999461   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:26.075503   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:26.499267   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:26.499404   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:26.575581   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:27.000240   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:27.000256   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:27.076484   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:27.498943   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:27.498982   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:27.576647   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:27.999889   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:27.999892   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:28.076936   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:28.499298   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:28.499334   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1219 02:26:28.575654   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:28.999274   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:28.999276   10286 kapi.go:107] duration metric: took 41.503606159s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1219 02:26:29.075441   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:29.500004   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:29.576763   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:29.999898   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:30.100968   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:30.499400   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:30.576234   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:31.000185   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:31.079107   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:31.499680   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:31.576400   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:32.000112   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:32.076613   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:32.499949   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:32.576489   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:33.000224   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:33.075924   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:33.500781   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:33.576115   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:33.999454   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:34.099985   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:34.499278   10286 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1219 02:26:34.575750   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:34.999884   10286 kapi.go:107] duration metric: took 47.50401219s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1219 02:26:35.076308   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:35.576577   10286 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1219 02:26:36.075466   10286 kapi.go:107] duration metric: took 42.502549881s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1219 02:26:36.076920   10286 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-791857 cluster.
	I1219 02:26:36.078108   10286 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1219 02:26:36.079217   10286 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1219 02:26:36.080346   10286 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, registry-creds, ingress-dns, nvidia-device-plugin, storage-provisioner, default-storageclass, yakd, metrics-server, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1219 02:26:36.082203   10286 addons.go:546] duration metric: took 50.839312711s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner registry-creds ingress-dns nvidia-device-plugin storage-provisioner default-storageclass yakd metrics-server inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1219 02:26:36.082247   10286 start.go:247] waiting for cluster config update ...
	I1219 02:26:36.082272   10286 start.go:256] writing updated cluster config ...
	I1219 02:26:36.082506   10286 ssh_runner.go:195] Run: rm -f paused
	I1219 02:26:36.086366   10286 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 02:26:36.089074   10286 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w88lw" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.092645   10286 pod_ready.go:94] pod "coredns-66bc5c9577-w88lw" is "Ready"
	I1219 02:26:36.092662   10286 pod_ready.go:86] duration metric: took 3.569548ms for pod "coredns-66bc5c9577-w88lw" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.094198   10286 pod_ready.go:83] waiting for pod "etcd-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.097235   10286 pod_ready.go:94] pod "etcd-addons-791857" is "Ready"
	I1219 02:26:36.097253   10286 pod_ready.go:86] duration metric: took 3.03834ms for pod "etcd-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.098892   10286 pod_ready.go:83] waiting for pod "kube-apiserver-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.101997   10286 pod_ready.go:94] pod "kube-apiserver-addons-791857" is "Ready"
	I1219 02:26:36.102017   10286 pod_ready.go:86] duration metric: took 3.103653ms for pod "kube-apiserver-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.103542   10286 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.490223   10286 pod_ready.go:94] pod "kube-controller-manager-addons-791857" is "Ready"
	I1219 02:26:36.490251   10286 pod_ready.go:86] duration metric: took 386.690084ms for pod "kube-controller-manager-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:36.690225   10286 pod_ready.go:83] waiting for pod "kube-proxy-7g9j9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:37.089844   10286 pod_ready.go:94] pod "kube-proxy-7g9j9" is "Ready"
	I1219 02:26:37.089868   10286 pod_ready.go:86] duration metric: took 399.618352ms for pod "kube-proxy-7g9j9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:37.289909   10286 pod_ready.go:83] waiting for pod "kube-scheduler-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:37.690288   10286 pod_ready.go:94] pod "kube-scheduler-addons-791857" is "Ready"
	I1219 02:26:37.690314   10286 pod_ready.go:86] duration metric: took 400.378337ms for pod "kube-scheduler-addons-791857" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 02:26:37.690326   10286 pod_ready.go:40] duration metric: took 1.603940629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 02:26:37.732861   10286 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 02:26:37.734671   10286 out.go:179] * Done! kubectl is now configured to use "addons-791857" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 02:26:40 addons-791857 crio[771]: time="2025-12-19T02:26:40.273297988Z" level=info msg="Starting container: 5f9c339e2a099252079065313f4598ca7cb8c463636bab95a8c5d6eaf851bc05" id=8d31a8cd-60b7-4c38-af3c-7a9f4dd268f8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 02:26:40 addons-791857 crio[771]: time="2025-12-19T02:26:40.27534855Z" level=info msg="Started container" PID=6302 containerID=5f9c339e2a099252079065313f4598ca7cb8c463636bab95a8c5d6eaf851bc05 description=default/busybox/busybox id=8d31a8cd-60b7-4c38-af3c-7a9f4dd268f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d94480efc53ed8069b0d07cbbca622b2cbff45159a20cbffc9a7c2fe6bb5705
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.607662939Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c/POD" id=675c49fd-181e-4a94-bee4-8e4f45fdfcc1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.60778084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.615009378Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c Namespace:local-path-storage ID:efb616dd5d1bf2e6355be865ef75a0015c7e69e9ebe0eaa94c6fc2d398dc60d0 UID:190ea07d-e706-4fed-beae-0e23662d058c NetNS:/var/run/netns/91b4ed98-6673-4264-8c15-12aa72e5d3ac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005c2c00}] Aliases:map[]}"
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.615058599Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c to CNI network \"kindnet\" (type=ptp)"
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.625972104Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c Namespace:local-path-storage ID:efb616dd5d1bf2e6355be865ef75a0015c7e69e9ebe0eaa94c6fc2d398dc60d0 UID:190ea07d-e706-4fed-beae-0e23662d058c NetNS:/var/run/netns/91b4ed98-6673-4264-8c15-12aa72e5d3ac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005c2c00}] Aliases:map[]}"
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.626168678Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c for CNI network kindnet (type=ptp)"
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.627539224Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.628955595Z" level=info msg="Ran pod sandbox efb616dd5d1bf2e6355be865ef75a0015c7e69e9ebe0eaa94c6fc2d398dc60d0 with infra container: local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c/POD" id=675c49fd-181e-4a94-bee4-8e4f45fdfcc1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.630213254Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=331a77b7-511f-4da6-9731-d5018f7fb0a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.630368229Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=331a77b7-511f-4da6-9731-d5018f7fb0a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.630405716Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=331a77b7-511f-4da6-9731-d5018f7fb0a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.631027623Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=17731bdc-e23b-4bb4-85a8-008965c0acab name=/runtime.v1.ImageService/PullImage
	Dec 19 02:26:48 addons-791857 crio[771]: time="2025-12-19T02:26:48.639685788Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.123478726Z" level=info msg="Pulled image: docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee" id=17731bdc-e23b-4bb4-85a8-008965c0acab name=/runtime.v1.ImageService/PullImage
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.12412543Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=05ceb26f-bedd-4d64-bd2f-ce57b460ea15 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.125969666Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=2c4d2d6e-2a1c-408d-b1c8-17c4f27ee47b name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.130020142Z" level=info msg="Creating container: local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c/helper-pod" id=eaf2962a-968a-4a80-9604-03b1aaeb40b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.130138256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.136162907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.13665091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.172219819Z" level=info msg="Created container 585e89d2c14a84afa49e6925a604d3ece12ab5eaf0c75bd9b1eb319bad89c378: local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c/helper-pod" id=eaf2962a-968a-4a80-9604-03b1aaeb40b1 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.173015299Z" level=info msg="Starting container: 585e89d2c14a84afa49e6925a604d3ece12ab5eaf0c75bd9b1eb319bad89c378" id=fa7cc0b4-7a8f-4799-a50b-2638bf7a650c name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 02:26:49 addons-791857 crio[771]: time="2025-12-19T02:26:49.175423004Z" level=info msg="Started container" PID=6568 containerID=585e89d2c14a84afa49e6925a604d3ece12ab5eaf0c75bd9b1eb319bad89c378 description=local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c/helper-pod id=fa7cc0b4-7a8f-4799-a50b-2638bf7a650c name=/runtime.v1.RuntimeService/StartContainer sandboxID=efb616dd5d1bf2e6355be865ef75a0015c7e69e9ebe0eaa94c6fc2d398dc60d0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	585e89d2c14a8       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            Less than a second ago   Exited              helper-pod                               0                   efb616dd5d1bf       helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c   local-path-storage
	5f9c339e2a099       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago            Running             busybox                                  0                   2d94480efc53e       busybox                                                      default
	9ddd01031bdbf       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 13 seconds ago           Running             gcp-auth                                 0                   c2d8b53d07769       gcp-auth-78565c9fb4-6bmz4                                    gcp-auth
	18463208a59b6       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             15 seconds ago           Running             controller                               0                   f6b27f69afa7f       ingress-nginx-controller-85d4c799dd-qmd9h                    ingress-nginx
	0ab375c325b23       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             19 seconds ago           Exited              patch                                    2                   e9e559cc42e5f       ingress-nginx-admission-patch-kl62v                          ingress-nginx
	e7ab741310c71       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          21 seconds ago           Running             csi-snapshotter                          0                   50887a9e8812c       csi-hostpathplugin-stf22                                     kube-system
	96a4c77bc9411       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          22 seconds ago           Running             csi-provisioner                          0                   50887a9e8812c       csi-hostpathplugin-stf22                                     kube-system
	3b4d17ba42562       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            23 seconds ago           Running             liveness-probe                           0                   50887a9e8812c       csi-hostpathplugin-stf22                                     kube-system
	3da71007b4d24       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           23 seconds ago           Running             hostpath                                 0                   50887a9e8812c       csi-hostpathplugin-stf22                                     kube-system
	464d91d87ed3b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                24 seconds ago           Running             node-driver-registrar                    0                   50887a9e8812c       csi-hostpathplugin-stf22                                     kube-system
	2af2c2fc8740d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            25 seconds ago           Running             gadget                                   0                   9300ee68707ba       gadget-j5dvh                                                 gadget
	3952448da55ae       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              27 seconds ago           Running             registry-proxy                           0                   6f1c59bad19dd       registry-proxy-wsz68                                         kube-system
	1bb6f09e7568e       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     29 seconds ago           Running             nvidia-device-plugin-ctr                 0                   b0e343d8bbba4       nvidia-device-plugin-daemonset-9ngs4                         kube-system
	889116c0a9d40       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     31 seconds ago           Running             amd-gpu-device-plugin                    0                   57c70d41b9de3       amd-gpu-device-plugin-j2hvw                                  kube-system
	258da604e725d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      32 seconds ago           Running             volume-snapshot-controller               0                   76a5e0e088a99       snapshot-controller-7d9fbc56b8-v6xz5                         kube-system
	0576bfa9d823e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   32 seconds ago           Running             csi-external-health-monitor-controller   0                   50887a9e8812c       csi-hostpathplugin-stf22                                     kube-system
	3d53a5d1fdc9e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   34 seconds ago           Exited              patch                                    0                   687a383896d3a       gcp-auth-certs-patch-p9npg                                   gcp-auth
	c136b6a814e13       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   34 seconds ago           Exited              create                                   0                   656b522040e3a       gcp-auth-certs-create-49s2r                                  gcp-auth
	97ca5f9b244b7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      34 seconds ago           Running             volume-snapshot-controller               0                   821953c6b09fe       snapshot-controller-7d9fbc56b8-d42wc                         kube-system
	fc1c8efae8677       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   35 seconds ago           Exited              create                                   0                   5ee9003edf82a       ingress-nginx-admission-create-l2d6q                         ingress-nginx
	6784d80b9a465       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             35 seconds ago           Running             csi-attacher                             0                   6e6e9d2fbbc60       csi-hostpath-attacher-0                                      kube-system
	88c063c48a86c       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              36 seconds ago           Running             csi-resizer                              0                   db2f968a6a8b0       csi-hostpath-resizer-0                                       kube-system
	05ef0f62db0a2       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              38 seconds ago           Running             yakd                                     0                   96fc102eaeef6       yakd-dashboard-6654c87f9b-b29t5                              yakd-dashboard
	5b8bfe2727c13       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               40 seconds ago           Running             cloud-spanner-emulator                   0                   ae3b7d3174f99       cloud-spanner-emulator-5bdddb765-jb86j                       default
	5a1a1413ec4ac       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        43 seconds ago           Running             metrics-server                           0                   0b444eb95c520       metrics-server-85b7d694d7-dnphb                              kube-system
	8d463ddbcc194       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           44 seconds ago           Running             registry                                 0                   114e345f99551       registry-6b586f9694-j2n8x                                    kube-system
	bebca1e94f189       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             45 seconds ago           Running             local-path-provisioner                   0                   9ab70674944c8       local-path-provisioner-648f6765c9-ld25w                      local-path-storage
	0775e7ddec4bd       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               46 seconds ago           Running             minikube-ingress-dns                     0                   deb2352024842       kube-ingress-dns-minikube                                    kube-system
	9de7849d09931       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             50 seconds ago           Running             coredns                                  0                   46b74af9d5ce9       coredns-66bc5c9577-w88lw                                     kube-system
	bbe76e37f22c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             50 seconds ago           Running             storage-provisioner                      0                   649155a47504b       storage-provisioner                                          kube-system
	483e903265e32       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           About a minute ago       Running             kindnet-cni                              0                   ce82743c073c2       kindnet-hdbwg                                                kube-system
	a51f3eaf36dba       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             About a minute ago       Running             kube-proxy                               0                   416ad8c57c1f1       kube-proxy-7g9j9                                             kube-system
	fcf4200f75e68       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             About a minute ago       Running             kube-apiserver                           0                   7ca4f5eb9a73f       kube-apiserver-addons-791857                                 kube-system
	73473c7fc9bbe       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago       Running             etcd                                     0                   ec784dde985be       etcd-addons-791857                                           kube-system
	e6fd793aa75bd       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             About a minute ago       Running             kube-scheduler                           0                   043acd3d662a9       kube-scheduler-addons-791857                                 kube-system
	8cc6b1da7c4c1       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             About a minute ago       Running             kube-controller-manager                  0                   241a3a9bbbcf3       kube-controller-manager-addons-791857                        kube-system
	
	
	==> coredns [9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580] <==
	[INFO] 10.244.0.15:57537 - 65395 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0001057s
	[INFO] 10.244.0.15:58193 - 5480 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081234s
	[INFO] 10.244.0.15:58193 - 5745 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00014523s
	[INFO] 10.244.0.15:57981 - 31362 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.00010433s
	[INFO] 10.244.0.15:57981 - 31010 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000147168s
	[INFO] 10.244.0.15:51253 - 8570 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000056546s
	[INFO] 10.244.0.15:51253 - 8255 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000098116s
	[INFO] 10.244.0.15:41930 - 48116 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000067517s
	[INFO] 10.244.0.15:41930 - 48396 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000112862s
	[INFO] 10.244.0.15:52268 - 40057 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086947s
	[INFO] 10.244.0.15:52268 - 40300 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145455s
	[INFO] 10.244.0.22:37315 - 44417 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000228988s
	[INFO] 10.244.0.22:39010 - 51077 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000287071s
	[INFO] 10.244.0.22:60419 - 34471 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112481s
	[INFO] 10.244.0.22:59088 - 51427 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000169286s
	[INFO] 10.244.0.22:35692 - 44533 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000145093s
	[INFO] 10.244.0.22:39260 - 27058 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00021054s
	[INFO] 10.244.0.22:34545 - 37237 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006180661s
	[INFO] 10.244.0.22:45092 - 51111 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008953081s
	[INFO] 10.244.0.22:38130 - 47986 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006741378s
	[INFO] 10.244.0.22:44659 - 21460 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006859595s
	[INFO] 10.244.0.22:45059 - 49278 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00637689s
	[INFO] 10.244.0.22:36221 - 65074 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006525682s
	[INFO] 10.244.0.22:39091 - 13716 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001227658s
	[INFO] 10.244.0.22:44227 - 25729 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002682026s
	
	
	==> describe nodes <==
	Name:               addons-791857
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-791857
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=addons-791857
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_25_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-791857
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-791857"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:25:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-791857
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:26:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:26:40 +0000   Fri, 19 Dec 2025 02:25:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:26:40 +0000   Fri, 19 Dec 2025 02:25:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:26:40 +0000   Fri, 19 Dec 2025 02:25:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:26:40 +0000   Fri, 19 Dec 2025 02:25:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-791857
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                75c2a887-6e79-49d2-accf-6fefcc720450
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-jb86j                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  gadget                      gadget-j5dvh                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  gcp-auth                    gcp-auth-78565c9fb4-6bmz4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-qmd9h                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         63s
	  kube-system                 amd-gpu-device-plugin-j2hvw                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-66bc5c9577-w88lw                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     64s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 csi-hostpathplugin-stf22                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 etcd-addons-791857                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         70s
	  kube-system                 kindnet-hdbwg                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      64s
	  kube-system                 kube-apiserver-addons-791857                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-addons-791857                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-7g9j9                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-addons-791857                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 metrics-server-85b7d694d7-dnphb                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         63s
	  kube-system                 nvidia-device-plugin-daemonset-9ngs4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 registry-6b586f9694-j2n8x                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 registry-creds-764b6fb674-xdlrg                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 registry-proxy-wsz68                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 snapshot-controller-7d9fbc56b8-d42wc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 snapshot-controller-7d9fbc56b8-v6xz5                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  local-path-storage          helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-648f6765c9-ld25w                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-b29t5                               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s (x8 over 75s)  kubelet          Node addons-791857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x8 over 75s)  kubelet          Node addons-791857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x8 over 75s)  kubelet          Node addons-791857 status is now: NodeHasSufficientPID
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s                kubelet          Node addons-791857 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s                kubelet          Node addons-791857 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s                kubelet          Node addons-791857 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           65s                node-controller  Node addons-791857 event: Registered Node addons-791857 in Controller
	  Normal  NodeReady                51s                kubelet          Node addons-791857 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec19 02:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001836] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087018] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.399596] i8042: Warning: Keylock active
	[  +0.010496] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.491260] block sda: the capability attribute has been deprecated.
	[  +0.091115] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025741] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.646270] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4] <==
	{"level":"warn","ts":"2025-12-19T02:25:36.917931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.923911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.930057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.936296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.942529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.949011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.961335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.967989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.975765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.983039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:36.994919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:37.001224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:37.007888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:47.950848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:25:47.957984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45416","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:26:08.976378Z","caller":"traceutil/trace.go:172","msg":"trace[1967062358] transaction","detail":"{read_only:false; response_revision:1001; number_of_response:1; }","duration":"128.79707ms","start":"2025-12-19T02:26:08.847558Z","end":"2025-12-19T02:26:08.976355Z","steps":["trace[1967062358] 'process raft request'  (duration: 110.719115ms)","trace[1967062358] 'compare'  (duration: 17.994964ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T02:26:09.013062Z","caller":"traceutil/trace.go:172","msg":"trace[1349280637] transaction","detail":"{read_only:false; response_revision:1004; number_of_response:1; }","duration":"163.883085ms","start":"2025-12-19T02:26:08.849172Z","end":"2025-12-19T02:26:09.013055Z","steps":["trace[1349280637] 'process raft request'  (duration: 163.833558ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:26:09.013081Z","caller":"traceutil/trace.go:172","msg":"trace[1662932122] transaction","detail":"{read_only:false; response_revision:1003; number_of_response:1; }","duration":"164.094898ms","start":"2025-12-19T02:26:08.848968Z","end":"2025-12-19T02:26:09.013063Z","steps":["trace[1662932122] 'process raft request'  (duration: 164.001974ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:26:09.013043Z","caller":"traceutil/trace.go:172","msg":"trace[218474860] transaction","detail":"{read_only:false; response_revision:1002; number_of_response:1; }","duration":"165.455319ms","start":"2025-12-19T02:26:08.847567Z","end":"2025-12-19T02:26:09.013023Z","steps":["trace[218474860] 'process raft request'  (duration: 165.305184ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:26:14.436321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:14.445295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:14.471069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:26:14.479054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52152","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:26:38.525237Z","caller":"traceutil/trace.go:172","msg":"trace[924737974] transaction","detail":"{read_only:false; response_revision:1239; number_of_response:1; }","duration":"104.853859ms","start":"2025-12-19T02:26:38.420365Z","end":"2025-12-19T02:26:38.525219Z","steps":["trace[924737974] 'process raft request'  (duration: 104.81566ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:26:38.525247Z","caller":"traceutil/trace.go:172","msg":"trace[1045968563] transaction","detail":"{read_only:false; response_revision:1238; number_of_response:1; }","duration":"157.002694ms","start":"2025-12-19T02:26:38.368229Z","end":"2025-12-19T02:26:38.525232Z","steps":["trace[1045968563] 'process raft request'  (duration: 132.360987ms)","trace[1045968563] 'compare'  (duration: 24.476047ms)"],"step_count":2}
	
	
	==> gcp-auth [9ddd01031bdbf0666aae205c610b63776c66380347b086a2704c9e17e86f1d33] <==
	2025/12/19 02:26:35 GCP Auth Webhook started!
	2025/12/19 02:26:38 Ready to marshal response ...
	2025/12/19 02:26:38 Ready to write response ...
	2025/12/19 02:26:38 Ready to marshal response ...
	2025/12/19 02:26:38 Ready to write response ...
	2025/12/19 02:26:38 Ready to marshal response ...
	2025/12/19 02:26:38 Ready to write response ...
	2025/12/19 02:26:48 Ready to marshal response ...
	2025/12/19 02:26:48 Ready to write response ...
	2025/12/19 02:26:48 Ready to marshal response ...
	2025/12/19 02:26:48 Ready to write response ...
	
	
	==> kernel <==
	 02:26:49 up 9 min,  0 user,  load average: 1.76, 0.72, 0.27
	Linux addons-791857 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07] <==
	I1219 02:25:47.844764       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1219 02:25:47.844948       1 main.go:148] setting mtu 1500 for CNI 
	I1219 02:25:47.844981       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 02:25:47.845011       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T02:25:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 02:25:48.048466       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:25:48.048502       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:25:48.048514       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:25:48.048690       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:25:48.449518       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:25:48.449543       1 metrics.go:72] Registering metrics
	I1219 02:25:48.449615       1 controller.go:711] "Syncing nftables rules"
	I1219 02:25:58.049267       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:25:58.049308       1 main.go:301] handling current node
	I1219 02:26:08.048849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:26:08.048909       1 main.go:301] handling current node
	I1219 02:26:18.048863       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:26:18.048913       1 main.go:301] handling current node
	I1219 02:26:28.049094       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:26:28.049138       1 main.go:301] handling current node
	I1219 02:26:38.048781       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:26:38.048828       1 main.go:301] handling current node
	I1219 02:26:48.048640       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:26:48.048781       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5] <==
	 > logger="UnhandledError"
	E1219 02:26:09.020899       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.203.231:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.203.231:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.203.231:443: connect: connection refused" logger="UnhandledError"
	E1219 02:26:09.021966       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.203.231:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.203.231:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.203.231:443: connect: connection refused" logger="UnhandledError"
	W1219 02:26:10.020811       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 02:26:10.020829       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 02:26:10.021012       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 02:26:10.021062       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 02:26:10.021013       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 02:26:10.022204       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 02:26:14.031817       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 02:26:14.031867       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 02:26:14.031880       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.203.231:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.203.231:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1219 02:26:14.042313       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 02:26:14.436234       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:26:14.444975       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:26:14.465395       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:26:14.478836       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1219 02:26:47.617544       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37810: use of closed network connection
	E1219 02:26:47.755212       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37824: use of closed network connection
	
	
	==> kube-controller-manager [8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702] <==
	I1219 02:25:44.419483       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 02:25:44.419624       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-791857"
	I1219 02:25:44.419677       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1219 02:25:44.419748       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 02:25:44.419810       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 02:25:44.420365       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 02:25:44.420379       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 02:25:44.420366       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 02:25:44.420497       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:25:44.420589       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 02:25:44.420676       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 02:25:44.421605       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 02:25:44.421754       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 02:25:44.422638       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 02:25:44.423745       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:25:44.423746       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:25:44.428481       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 02:25:44.439216       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1219 02:25:46.549330       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1219 02:25:59.422839       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1219 02:26:14.429133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1219 02:26:14.429202       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1219 02:26:14.456992       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1219 02:26:14.529985       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:26:14.558196       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f] <==
	I1219 02:25:46.356225       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:25:46.577032       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:25:46.679262       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:25:46.679305       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:25:46.679402       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:25:46.757988       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:25:46.758058       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:25:46.765698       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:25:46.772192       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:25:46.772371       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:25:46.780273       1 config.go:200] "Starting service config controller"
	I1219 02:25:46.780293       1 config.go:309] "Starting node config controller"
	I1219 02:25:46.780306       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:25:46.780308       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:25:46.780314       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:25:46.780295       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:25:46.780327       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:25:46.780334       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:25:46.780316       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:25:46.880516       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:25:46.880515       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:25:46.880582       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a] <==
	E1219 02:25:37.435178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:25:37.435270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:25:37.435304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:25:37.435348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 02:25:37.435377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 02:25:37.435380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:25:37.435402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:25:37.435452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:25:37.435507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 02:25:37.435524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:25:37.435534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:25:37.435597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:25:37.435688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:25:37.435731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:25:37.435794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:25:38.241559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 02:25:38.306778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:25:38.323967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:25:38.355388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:25:38.436128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:25:38.453364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:25:38.517460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:25:38.523549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:25:38.529692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1219 02:25:39.027501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 02:26:23 addons-791857 kubelet[1285]: I1219 02:26:23.815987    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wsz68" secret="" err="secret \"gcp-auth\" not found"
	Dec 19 02:26:24 addons-791857 kubelet[1285]: I1219 02:26:24.836915    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-j5dvh" podStartSLOduration=17.312537943 podStartE2EDuration="38.836896772s" podCreationTimestamp="2025-12-19 02:25:46 +0000 UTC" firstStartedPulling="2025-12-19 02:26:02.837305111 +0000 UTC m=+23.307045690" lastFinishedPulling="2025-12-19 02:26:24.36166394 +0000 UTC m=+44.831404519" observedRunningTime="2025-12-19 02:26:24.835761415 +0000 UTC m=+45.305502017" watchObservedRunningTime="2025-12-19 02:26:24.836896772 +0000 UTC m=+45.306637367"
	Dec 19 02:26:26 addons-791857 kubelet[1285]: I1219 02:26:26.645573    1285 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 19 02:26:26 addons-791857 kubelet[1285]: I1219 02:26:26.645624    1285 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 19 02:26:28 addons-791857 kubelet[1285]: I1219 02:26:28.859820    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-stf22" podStartSLOduration=1.3990024220000001 podStartE2EDuration="30.859800088s" podCreationTimestamp="2025-12-19 02:25:58 +0000 UTC" firstStartedPulling="2025-12-19 02:25:58.680587358 +0000 UTC m=+19.150327946" lastFinishedPulling="2025-12-19 02:26:28.141385036 +0000 UTC m=+48.611125612" observedRunningTime="2025-12-19 02:26:28.859287396 +0000 UTC m=+49.329027992" watchObservedRunningTime="2025-12-19 02:26:28.859800088 +0000 UTC m=+49.329540685"
	Dec 19 02:26:29 addons-791857 kubelet[1285]: I1219 02:26:29.614123    1285 scope.go:117] "RemoveContainer" containerID="3264b1c181f86ade755b6b3d8b9da21a7161e5d2062b1fb066ebf9ccf7fe34df"
	Dec 19 02:26:30 addons-791857 kubelet[1285]: E1219 02:26:30.124058    1285 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 19 02:26:30 addons-791857 kubelet[1285]: E1219 02:26:30.124156    1285 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bd93e94-0edf-490d-a978-5aa0fe38b999-gcr-creds podName:5bd93e94-0edf-490d-a978-5aa0fe38b999 nodeName:}" failed. No retries permitted until 2025-12-19 02:27:02.124137715 +0000 UTC m=+82.593878306 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5bd93e94-0edf-490d-a978-5aa0fe38b999-gcr-creds") pod "registry-creds-764b6fb674-xdlrg" (UID: "5bd93e94-0edf-490d-a978-5aa0fe38b999") : secret "registry-creds-gcr" not found
	Dec 19 02:26:30 addons-791857 kubelet[1285]: I1219 02:26:30.858611    1285 scope.go:117] "RemoveContainer" containerID="3264b1c181f86ade755b6b3d8b9da21a7161e5d2062b1fb066ebf9ccf7fe34df"
	Dec 19 02:26:32 addons-791857 kubelet[1285]: I1219 02:26:32.242775    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p6hw\" (UniqueName: \"kubernetes.io/projected/c8b1efb1-e767-47e7-ab55-bcfbdb6639e2-kube-api-access-8p6hw\") pod \"c8b1efb1-e767-47e7-ab55-bcfbdb6639e2\" (UID: \"c8b1efb1-e767-47e7-ab55-bcfbdb6639e2\") "
	Dec 19 02:26:32 addons-791857 kubelet[1285]: I1219 02:26:32.245380    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b1efb1-e767-47e7-ab55-bcfbdb6639e2-kube-api-access-8p6hw" (OuterVolumeSpecName: "kube-api-access-8p6hw") pod "c8b1efb1-e767-47e7-ab55-bcfbdb6639e2" (UID: "c8b1efb1-e767-47e7-ab55-bcfbdb6639e2"). InnerVolumeSpecName "kube-api-access-8p6hw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 19 02:26:32 addons-791857 kubelet[1285]: I1219 02:26:32.343777    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8p6hw\" (UniqueName: \"kubernetes.io/projected/c8b1efb1-e767-47e7-ab55-bcfbdb6639e2-kube-api-access-8p6hw\") on node \"addons-791857\" DevicePath \"\""
	Dec 19 02:26:32 addons-791857 kubelet[1285]: I1219 02:26:32.874907    1285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9e559cc42e5f20ea00d59c265888670042ff19a0ef16d1f94b60f7ece0960bb"
	Dec 19 02:26:34 addons-791857 kubelet[1285]: I1219 02:26:34.901360    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-qmd9h" podStartSLOduration=45.478899542 podStartE2EDuration="48.901340591s" podCreationTimestamp="2025-12-19 02:25:46 +0000 UTC" firstStartedPulling="2025-12-19 02:26:30.414605391 +0000 UTC m=+50.884345982" lastFinishedPulling="2025-12-19 02:26:33.837046452 +0000 UTC m=+54.306787031" observedRunningTime="2025-12-19 02:26:34.900992023 +0000 UTC m=+55.370732617" watchObservedRunningTime="2025-12-19 02:26:34.901340591 +0000 UTC m=+55.371081188"
	Dec 19 02:26:35 addons-791857 kubelet[1285]: I1219 02:26:35.899940    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-6bmz4" podStartSLOduration=37.764815688 podStartE2EDuration="42.899921749s" podCreationTimestamp="2025-12-19 02:25:53 +0000 UTC" firstStartedPulling="2025-12-19 02:26:30.419829956 +0000 UTC m=+50.889570532" lastFinishedPulling="2025-12-19 02:26:35.554936013 +0000 UTC m=+56.024676593" observedRunningTime="2025-12-19 02:26:35.899557345 +0000 UTC m=+56.369297945" watchObservedRunningTime="2025-12-19 02:26:35.899921749 +0000 UTC m=+56.369662349"
	Dec 19 02:26:38 addons-791857 kubelet[1285]: I1219 02:26:38.690803    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7rjs\" (UniqueName: \"kubernetes.io/projected/3fb02b19-4a11-4f81-8f32-b9969dbce522-kube-api-access-h7rjs\") pod \"busybox\" (UID: \"3fb02b19-4a11-4f81-8f32-b9969dbce522\") " pod="default/busybox"
	Dec 19 02:26:38 addons-791857 kubelet[1285]: I1219 02:26:38.690865    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3fb02b19-4a11-4f81-8f32-b9969dbce522-gcp-creds\") pod \"busybox\" (UID: \"3fb02b19-4a11-4f81-8f32-b9969dbce522\") " pod="default/busybox"
	Dec 19 02:26:40 addons-791857 kubelet[1285]: I1219 02:26:40.922446    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.631870089 podStartE2EDuration="2.922428745s" podCreationTimestamp="2025-12-19 02:26:38 +0000 UTC" firstStartedPulling="2025-12-19 02:26:38.943655227 +0000 UTC m=+59.413395815" lastFinishedPulling="2025-12-19 02:26:40.234213881 +0000 UTC m=+60.703954471" observedRunningTime="2025-12-19 02:26:40.92167192 +0000 UTC m=+61.391412517" watchObservedRunningTime="2025-12-19 02:26:40.922428745 +0000 UTC m=+61.392169342"
	Dec 19 02:26:47 addons-791857 kubelet[1285]: I1219 02:26:47.615230    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e815fa4-c55e-4f95-a698-d237b9fc0dd1" path="/var/lib/kubelet/pods/9e815fa4-c55e-4f95-a698-d237b9fc0dd1/volumes"
	Dec 19 02:26:47 addons-791857 kubelet[1285]: I1219 02:26:47.615637    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1104943-364e-4024-926b-da706a336d01" path="/var/lib/kubelet/pods/b1104943-364e-4024-926b-da706a336d01/volumes"
	Dec 19 02:26:47 addons-791857 kubelet[1285]: E1219 02:26:47.755189    1285 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51768->127.0.0.1:41765: write tcp 127.0.0.1:51768->127.0.0.1:41765: write: broken pipe
	Dec 19 02:26:48 addons-791857 kubelet[1285]: I1219 02:26:48.366377    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/190ea07d-e706-4fed-beae-0e23662d058c-data\") pod \"helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c\" (UID: \"190ea07d-e706-4fed-beae-0e23662d058c\") " pod="local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c"
	Dec 19 02:26:48 addons-791857 kubelet[1285]: I1219 02:26:48.366464    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmtpn\" (UniqueName: \"kubernetes.io/projected/190ea07d-e706-4fed-beae-0e23662d058c-kube-api-access-qmtpn\") pod \"helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c\" (UID: \"190ea07d-e706-4fed-beae-0e23662d058c\") " pod="local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c"
	Dec 19 02:26:48 addons-791857 kubelet[1285]: I1219 02:26:48.366520    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/190ea07d-e706-4fed-beae-0e23662d058c-script\") pod \"helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c\" (UID: \"190ea07d-e706-4fed-beae-0e23662d058c\") " pod="local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c"
	Dec 19 02:26:48 addons-791857 kubelet[1285]: I1219 02:26:48.366549    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/190ea07d-e706-4fed-beae-0e23662d058c-gcp-creds\") pod \"helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c\" (UID: \"190ea07d-e706-4fed-beae-0e23662d058c\") " pod="local-path-storage/helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c"
	
	
	==> storage-provisioner [bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3] <==
	W1219 02:26:25.083637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:27.087369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:27.091592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:29.094688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:29.098104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:31.102003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:31.107902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:33.110914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:33.115171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:35.121125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:35.125542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:37.128255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:37.131733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:39.134891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:39.138761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:41.142111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:41.146570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:43.149231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:43.153691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:45.156572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:45.160963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:47.164245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:47.167631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:49.171113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:26:49.176289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-791857 -n addons-791857
helpers_test.go:270: (dbg) Run:  kubectl --context addons-791857 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: test-local-path ingress-nginx-admission-create-l2d6q ingress-nginx-admission-patch-kl62v registry-creds-764b6fb674-xdlrg helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-791857 describe pod test-local-path ingress-nginx-admission-create-l2d6q ingress-nginx-admission-patch-kl62v registry-creds-764b6fb674-xdlrg helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-791857 describe pod test-local-path ingress-nginx-admission-create-l2d6q ingress-nginx-admission-patch-kl62v registry-creds-764b6fb674-xdlrg helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c: exit status 1 (68.065675ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mklml (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-mklml:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l2d6q" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kl62v" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-xdlrg" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-791857 describe pod test-local-path ingress-nginx-admission-create-l2d6q ingress-nginx-admission-patch-kl62v registry-creds-764b6fb674-xdlrg helper-pod-create-pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable headlamp --alsologtostderr -v=1: exit status 11 (247.729164ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:26:50.435515   19421 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:26:50.435835   19421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:50.435846   19421 out.go:374] Setting ErrFile to fd 2...
	I1219 02:26:50.435853   19421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:50.436074   19421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:26:50.436415   19421 mustload.go:66] Loading cluster: addons-791857
	I1219 02:26:50.436788   19421 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:50.436811   19421 addons.go:638] checking whether the cluster is paused
	I1219 02:26:50.436909   19421 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:50.436925   19421 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:26:50.437296   19421 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:26:50.455755   19421 ssh_runner.go:195] Run: systemctl --version
	I1219 02:26:50.455822   19421 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:26:50.473935   19421 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:26:50.574540   19421 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:26:50.574648   19421 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:26:50.604003   19421 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:26:50.604029   19421 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:26:50.604035   19421 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:26:50.604040   19421 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:26:50.604044   19421 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:26:50.604056   19421 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:26:50.604068   19421 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:26:50.604072   19421 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:26:50.604077   19421 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:26:50.604085   19421 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:26:50.604092   19421 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:26:50.604096   19421 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:26:50.604105   19421 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:26:50.604109   19421 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:26:50.604116   19421 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:26:50.604124   19421 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:26:50.604128   19421 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:26:50.604134   19421 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:26:50.604138   19421 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:26:50.604142   19421 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:26:50.604147   19421 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:26:50.604151   19421 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:26:50.604158   19421 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:26:50.604163   19421 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:26:50.604170   19421 cri.go:92] found id: ""
	I1219 02:26:50.604224   19421 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:26:50.618839   19421 out.go:203] 
	W1219 02:26:50.619937   19421 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:26:50.619957   19421 out.go:285] * 
	* 
	W1219 02:26:50.623133   19421 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:26:50.624182   19421 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.61s)
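Every addon-disable failure in this run exits the same way: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node, and that second command fails with "open /run/runc: no such file or directory" on this crio node, so the disable is aborted with MK_ADDON_DISABLE_PAUSED. A minimal way to re-run that probe from the host is sketched below; the profile name comes from this run, while the final ls probe for the runtime's actual state directory is an assumption, not something shown in the report.

	# hedged sketch: repeat the paused-state check that "addons disable" performs
	minikube -p addons-791857 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-791857 ssh -- sudo runc list -f json    # the step that fails: /run/runc is missing
	# assumption: see where the runtime actually keeps its state on this node
	minikube -p addons-791857 ssh -- ls /run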

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-jb86j" [23132239-f9b1-45c5-98c7-dc094af0ffc4] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002607933s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (247.429467ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:27:08.898294   21835 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:27:08.898600   21835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:08.898610   21835 out.go:374] Setting ErrFile to fd 2...
	I1219 02:27:08.898614   21835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:08.898816   21835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:27:08.899108   21835 mustload.go:66] Loading cluster: addons-791857
	I1219 02:27:08.899494   21835 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:08.899516   21835 addons.go:638] checking whether the cluster is paused
	I1219 02:27:08.899612   21835 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:08.899629   21835 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:27:08.900050   21835 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:27:08.918108   21835 ssh_runner.go:195] Run: systemctl --version
	I1219 02:27:08.918164   21835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:27:08.935762   21835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:27:09.035629   21835 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:27:09.035691   21835 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:27:09.065582   21835 cri.go:92] found id: "82083cc2b8ec2b9f35c8877a2b88e8140201a847ce5fcc112fb8edde1bd778a9"
	I1219 02:27:09.065608   21835 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:27:09.065614   21835 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:27:09.065619   21835 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:27:09.065624   21835 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:27:09.065630   21835 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:27:09.065634   21835 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:27:09.065638   21835 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:27:09.065643   21835 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:27:09.065651   21835 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:27:09.065656   21835 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:27:09.065661   21835 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:27:09.065666   21835 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:27:09.065671   21835 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:27:09.065685   21835 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:27:09.065692   21835 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:27:09.065697   21835 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:27:09.065717   21835 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:27:09.065722   21835 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:27:09.065726   21835 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:27:09.065731   21835 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:27:09.065738   21835 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:27:09.065751   21835 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:27:09.065756   21835 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:27:09.065761   21835 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:27:09.065766   21835 cri.go:92] found id: ""
	I1219 02:27:09.065812   21835 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:27:09.082102   21835 out.go:203] 
	W1219 02:27:09.083216   21835 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:27:09.083257   21835 out.go:285] * 
	* 
	W1219 02:27:09.088136   21835 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:27:09.089338   21835 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
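The cloud-spanner-emulator pod itself became healthy well inside the 6m window; only the follow-up addon disable fails, with the same runc paused-check error as above. The readiness wait in addons_test.go amounts to polling pods by label selector, which can be reproduced by hand roughly as follows (context, namespace, and label are the ones reported in the log above):

	# hedged sketch: manual equivalent of waiting for pods matching app=cloud-spanner-emulator
	kubectl --context addons-791857 -n default get pods -l app=cloud-spanner-emulator -o wide
	kubectl --context addons-791857 -n default wait --for=condition=Ready pod -l app=cloud-spanner-emulator --timeout=6m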

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-791857 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-791857 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-791857 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [613743a7-cee1-4498-92ea-235330baf125] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [613743a7-cee1-4498-92ea-235330baf125] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [613743a7-cee1-4498-92ea-235330baf125] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002499359s
addons_test.go:969: (dbg) Run:  kubectl --context addons-791857 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 ssh "cat /opt/local-path-provisioner/pvc-a65ac7d9-c295-4429-a4ea-55718aa6e02c_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-791857 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-791857 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (262.237789ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:26:55.949576   19903 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:26:55.949755   19903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:55.949765   19903 out.go:374] Setting ErrFile to fd 2...
	I1219 02:26:55.949769   19903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:55.949996   19903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:26:55.950275   19903 mustload.go:66] Loading cluster: addons-791857
	I1219 02:26:55.950585   19903 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:55.950608   19903 addons.go:638] checking whether the cluster is paused
	I1219 02:26:55.950731   19903 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:55.950757   19903 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:26:55.951190   19903 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:26:55.972715   19903 ssh_runner.go:195] Run: systemctl --version
	I1219 02:26:55.972784   19903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:26:55.992922   19903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:26:56.095224   19903 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:26:56.095339   19903 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:26:56.125296   19903 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:26:56.125317   19903 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:26:56.125320   19903 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:26:56.125324   19903 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:26:56.125327   19903 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:26:56.125330   19903 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:26:56.125333   19903 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:26:56.125336   19903 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:26:56.125338   19903 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:26:56.125349   19903 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:26:56.125352   19903 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:26:56.125355   19903 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:26:56.125358   19903 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:26:56.125361   19903 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:26:56.125363   19903 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:26:56.125370   19903 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:26:56.125373   19903 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:26:56.125377   19903 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:26:56.125380   19903 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:26:56.125383   19903 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:26:56.125388   19903 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:26:56.125390   19903 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:26:56.125393   19903 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:26:56.125396   19903 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:26:56.125398   19903 cri.go:92] found id: ""
	I1219 02:26:56.125434   19903 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:26:56.142021   19903 out.go:203] 
	W1219 02:26:56.143456   19903 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:26:56.143482   19903 out.go:285] * 
	* 
	W1219 02:26:56.147045   19903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:26:56.148508   19903 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.14s)
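The storage-provisioner-rancher flow itself completes: the PVC binds, the helper pod writes file1, the test reads it back over ssh, and the pod and PVC are deleted; only the final disable step hits the runc error. Had the PVC stayed Pending instead, the provisioner's own logs would be the next place to look; the namespace and deployment name in the sketch below are the usual ones for the rancher local-path provisioner and are assumptions, not values taken from this report.

	# hedged sketch: checks to run if test-pvc were stuck Pending
	kubectl --context addons-791857 -n default get pvc test-pvc -o jsonpath={.status.phase}
	kubectl --context addons-791857 -n local-path-storage logs deploy/local-path-provisioner --tail=50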

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-9ngs4" [e76215e0-394e-4006-8d6b-bc14b485dc1f] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003211841s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (253.992859ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:26:53.081898   19592 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:26:53.082043   19592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:53.082054   19592 out.go:374] Setting ErrFile to fd 2...
	I1219 02:26:53.082067   19592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:53.082280   19592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:26:53.082565   19592 mustload.go:66] Loading cluster: addons-791857
	I1219 02:26:53.082878   19592 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:53.082896   19592 addons.go:638] checking whether the cluster is paused
	I1219 02:26:53.083007   19592 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:53.083028   19592 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:26:53.083483   19592 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:26:53.102173   19592 ssh_runner.go:195] Run: systemctl --version
	I1219 02:26:53.102249   19592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:26:53.121136   19592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:26:53.223685   19592 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:26:53.223787   19592 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:26:53.252761   19592 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:26:53.252798   19592 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:26:53.252802   19592 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:26:53.252807   19592 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:26:53.252810   19592 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:26:53.252817   19592 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:26:53.252820   19592 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:26:53.252823   19592 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:26:53.252825   19592 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:26:53.252835   19592 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:26:53.252838   19592 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:26:53.252840   19592 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:26:53.252843   19592 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:26:53.252846   19592 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:26:53.252849   19592 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:26:53.252855   19592 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:26:53.252858   19592 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:26:53.252862   19592 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:26:53.252865   19592 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:26:53.252867   19592 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:26:53.252872   19592 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:26:53.252876   19592 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:26:53.252879   19592 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:26:53.252881   19592 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:26:53.252884   19592 cri.go:92] found id: ""
	I1219 02:26:53.252931   19592 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:26:53.266625   19592 out.go:203] 
	W1219 02:26:53.267752   19592 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:26:53.267775   19592 out.go:285] * 
	* 
	W1219 02:26:53.271017   19592 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:26:53.272237   19592 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)
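As with the other device-plugin subtests, the daemonset pod reports Running and healthy and only the disable step fails. Whether the plugin actually registered GPU capacity on the node is a separate question; the nvidia.com/gpu resource name in the sketch below is the plugin's conventional advertised resource and is an assumption here, since this CI host has no GPU to expose.

	# hedged sketch: confirm the plugin pod and any advertised GPU capacity
	kubectl --context addons-791857 -n kube-system get pods -l name=nvidia-device-plugin-ds
	kubectl --context addons-791857 get nodes -o jsonpath='{.items[*].status.allocatable.nvidia\.com/gpu}'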

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-b29t5" [6b1bb003-484f-4887-81bb-2cadc17dac05] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003515332s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable yakd --alsologtostderr -v=1: exit status 11 (245.707739ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:27:06.972142   21679 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:27:06.972384   21679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:06.972393   21679 out.go:374] Setting ErrFile to fd 2...
	I1219 02:27:06.972398   21679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:27:06.972560   21679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:27:06.972822   21679 mustload.go:66] Loading cluster: addons-791857
	I1219 02:27:06.973113   21679 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:06.973131   21679 addons.go:638] checking whether the cluster is paused
	I1219 02:27:06.973205   21679 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:27:06.973216   21679 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:27:06.973559   21679 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:27:06.992013   21679 ssh_runner.go:195] Run: systemctl --version
	I1219 02:27:06.992073   21679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:27:07.010429   21679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:27:07.110638   21679 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:27:07.110728   21679 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:27:07.138992   21679 cri.go:92] found id: "82083cc2b8ec2b9f35c8877a2b88e8140201a847ce5fcc112fb8edde1bd778a9"
	I1219 02:27:07.139020   21679 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:27:07.139026   21679 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:27:07.139031   21679 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:27:07.139035   21679 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:27:07.139039   21679 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:27:07.139042   21679 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:27:07.139045   21679 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:27:07.139047   21679 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:27:07.139061   21679 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:27:07.139067   21679 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:27:07.139069   21679 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:27:07.139072   21679 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:27:07.139075   21679 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:27:07.139078   21679 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:27:07.139085   21679 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:27:07.139088   21679 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:27:07.139092   21679 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:27:07.139095   21679 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:27:07.139098   21679 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:27:07.139101   21679 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:27:07.139103   21679 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:27:07.139106   21679 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:27:07.139109   21679 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:27:07.139112   21679 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:27:07.139115   21679 cri.go:92] found id: ""
	I1219 02:27:07.139150   21679 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:27:07.153571   21679 out.go:203] 
	W1219 02:27:07.154937   21679 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:27:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:27:07.154954   21679 out.go:285] * 
	* 
	W1219 02:27:07.157881   21679 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:27:07.159098   21679 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.25s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-j2hvw" [0bdaaa68-7ce6-41e2-8ef0-41bbe9cc8cbf] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003349098s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-791857 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-791857 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (260.363927ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:26:55.691946   19817 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:26:55.692249   19817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:55.692260   19817 out.go:374] Setting ErrFile to fd 2...
	I1219 02:26:55.692264   19817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:55.692456   19817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:26:55.692713   19817 mustload.go:66] Loading cluster: addons-791857
	I1219 02:26:55.693067   19817 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:55.693087   19817 addons.go:638] checking whether the cluster is paused
	I1219 02:26:55.693169   19817 config.go:182] Loaded profile config "addons-791857": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:26:55.693181   19817 host.go:66] Checking if "addons-791857" exists ...
	I1219 02:26:55.693520   19817 cli_runner.go:164] Run: docker container inspect addons-791857 --format={{.State.Status}}
	I1219 02:26:55.713312   19817 ssh_runner.go:195] Run: systemctl --version
	I1219 02:26:55.713370   19817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-791857
	I1219 02:26:55.735304   19817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/addons-791857/id_rsa Username:docker}
	I1219 02:26:55.839402   19817 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 02:26:55.839464   19817 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 02:26:55.868146   19817 cri.go:92] found id: "e7ab741310c7159227dc4626e72a83ac3b188e511b198b4eecbe1fbfa72a6a1d"
	I1219 02:26:55.868179   19817 cri.go:92] found id: "96a4c77bc9411da812fbbdb05f10e7f4626e9117b182160f7f703c6ca69da438"
	I1219 02:26:55.868183   19817 cri.go:92] found id: "3b4d17ba4256243dc128f083e9b674110dcf80d83f680893f8d1560a63f34ce8"
	I1219 02:26:55.868187   19817 cri.go:92] found id: "3da71007b4d24d06267e77306cc089ffb24dd26395c3cf25a087bebddec346df"
	I1219 02:26:55.868190   19817 cri.go:92] found id: "464d91d87ed3b2836ea8bd02787437d0615fd0b3715165642f0adfcf6734796a"
	I1219 02:26:55.868194   19817 cri.go:92] found id: "3952448da55ae48b025692fb6527671c951ceba9c3201a0cfc26c58f34736160"
	I1219 02:26:55.868197   19817 cri.go:92] found id: "1bb6f09e7568eb35fc4a1fa4554a64e2a8a2dc5103e913d143cf2351ffea0912"
	I1219 02:26:55.868199   19817 cri.go:92] found id: "889116c0a9d403ff95107f15b2cb4bc324e04429f5963a9814e1ead83b3ec0d5"
	I1219 02:26:55.868202   19817 cri.go:92] found id: "258da604e725d4407e60a6c8389099d1393afa4e245dedbe89d9ec3b45b908c6"
	I1219 02:26:55.868214   19817 cri.go:92] found id: "0576bfa9d823e389e837b502601bc350a20a0ef1b326b80a5a3bdff07282f675"
	I1219 02:26:55.868217   19817 cri.go:92] found id: "97ca5f9b244b723133d2ee7e9cf88c9ff3e0669e3f5144dc9d65f11e422589b0"
	I1219 02:26:55.868220   19817 cri.go:92] found id: "6784d80b9a46542b3e645c6e8d4b04def8668221cabc6ecdcfad0991c2fec046"
	I1219 02:26:55.868222   19817 cri.go:92] found id: "88c063c48a86c4d6b60c5e780eedfbcaaece7ef28eef4a4eed50297457b2f0cd"
	I1219 02:26:55.868225   19817 cri.go:92] found id: "5a1a1413ec4ac23fec4e18501c1dae0c954c686f19b103ea385a88927ea6b4dc"
	I1219 02:26:55.868233   19817 cri.go:92] found id: "8d463ddbcc194b758faa79280521c0dc3091a84ae8b1daed1462125c0f30a12d"
	I1219 02:26:55.868243   19817 cri.go:92] found id: "0775e7ddec4bddf3bf1f8c6146bfa87a31eab965b4cd4b9cdcb81072b3c193df"
	I1219 02:26:55.868246   19817 cri.go:92] found id: "9de7849d09931e8545cb193c0c133e4f48a2478e312f66c751d66daac9199580"
	I1219 02:26:55.868250   19817 cri.go:92] found id: "bbe76e37f22c483601a1b3e1967ff9fb850315576f8e7e47a90c7ed1bab593f3"
	I1219 02:26:55.868252   19817 cri.go:92] found id: "483e903265e322c82b7cfeb6c2cd6fdfb8900212d86fc292511093c09ce04d07"
	I1219 02:26:55.868255   19817 cri.go:92] found id: "a51f3eaf36dba0ef9d223354007cff8f566a886628eed2756e88a76fb84d455f"
	I1219 02:26:55.868258   19817 cri.go:92] found id: "fcf4200f75e689843ddc922c6d57acdb381cebd9891fa659a6990df808c09ea5"
	I1219 02:26:55.868261   19817 cri.go:92] found id: "73473c7fc9bbe226d61054cd86b0c64ebb4b011155ac700760284f1b9be79ac4"
	I1219 02:26:55.868264   19817 cri.go:92] found id: "e6fd793aa75bd52aacacfead162878e4b1933f59937ea6ddfc7b0e7c0084361a"
	I1219 02:26:55.868266   19817 cri.go:92] found id: "8cc6b1da7c4c1244ed3e67c38c6672283870866dfd11de667780d4667c5ce702"
	I1219 02:26:55.868269   19817 cri.go:92] found id: ""
	I1219 02:26:55.868330   19817 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:26:55.882088   19817 out.go:203] 
	W1219 02:26:55.883160   19817 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:26:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:26:55.883187   19817 out.go:285] * 
	* 
	W1219 02:26:55.886075   19817 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:26:55.887345   19817 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-791857 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.27s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (17.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-736733 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-736733 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-736733 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-736733 --alsologtostderr -v=1] stderr:
I1219 02:32:56.916095   46769 out.go:360] Setting OutFile to fd 1 ...
I1219 02:32:56.916462   46769 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:32:56.916471   46769 out.go:374] Setting ErrFile to fd 2...
I1219 02:32:56.916477   46769 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:32:56.917180   46769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:32:56.917568   46769 mustload.go:66] Loading cluster: functional-736733
I1219 02:32:56.918110   46769 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:32:56.918752   46769 cli_runner.go:164] Run: docker container inspect functional-736733 --format={{.State.Status}}
I1219 02:32:56.941678   46769 host.go:66] Checking if "functional-736733" exists ...
I1219 02:32:56.942052   46769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 02:32:57.019068   46769 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:32:57.006747815 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1219 02:32:57.019184   46769 api_server.go:166] Checking apiserver status ...
I1219 02:32:57.019251   46769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:32:57.019301   46769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-736733
I1219 02:32:57.043089   46769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-736733/id_rsa Username:docker}
I1219 02:32:57.167555   46769 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4338/cgroup
W1219 02:32:57.179388   46769 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4338/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1219 02:32:57.179467   46769 ssh_runner.go:195] Run: ls
I1219 02:32:57.184948   46769 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1219 02:32:57.191755   46769 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1219 02:32:57.191807   46769 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:32:57.192011   46769 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:32:57.192031   46769 addons.go:70] Setting dashboard=true in profile "functional-736733"
I1219 02:32:57.192044   46769 addons.go:239] Setting addon dashboard=true in "functional-736733"
I1219 02:32:57.192086   46769 host.go:66] Checking if "functional-736733" exists ...
I1219 02:32:57.192558   46769 cli_runner.go:164] Run: docker container inspect functional-736733 --format={{.State.Status}}
I1219 02:32:57.216761   46769 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:32:57.216798   46769 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:32:57.216898   46769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-736733
I1219 02:32:57.243244   46769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-736733/id_rsa Username:docker}
I1219 02:32:57.367012   46769 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:32:57.370920   46769 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:32:57.374660   46769 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:32:58.588207   46769 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.213513583s)
I1219 02:32:58.588323   46769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:33:01.856510   46769 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.268128641s)
I1219 02:33:01.856621   46769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:33:02.057151   46769 addons.go:500] Verifying addon dashboard=true in "functional-736733"
I1219 02:33:02.057492   46769 cli_runner.go:164] Run: docker container inspect functional-736733 --format={{.State.Status}}
I1219 02:33:02.080676   46769 out.go:179] * Verifying dashboard addon...
I1219 02:33:02.082820   46769 kapi.go:59] client config for functional-736733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.key", CAFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:02.083431   46769 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:33:02.083452   46769 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:33:02.083459   46769 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:33:02.083466   46769 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:33:02.083470   46769 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:33:02.083871   46769 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:33:02.093039   46769 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:33:02.093058   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:02.587527   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:03.087304   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:03.587506   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:04.087334   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:04.587621   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:05.087022   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:05.587950   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:06.087867   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:06.587862   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:07.087893   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:07.590898   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:08.098435   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:08.587229   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:09.087539   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:09.588164   46769 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:33:10.093697   46769 kapi.go:107] duration metric: took 8.009820244s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 02:33:10.097909   46769 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-736733 addons enable metrics-server

                                                
                                                
I1219 02:33:10.099691   46769 addons.go:202] Writing out "functional-736733" config to set dashboard=true...
W1219 02:33:10.100005   46769 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 02:33:10.100556   46769 kapi.go:59] client config for functional-736733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.key", CAFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:10.106152   46769 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy  kubernetes-dashboard  a4f42fac-c98b-4b0c-908d-15f998107c1a 819 0 2025-12-19 02:33:01 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 02:33:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".
":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:31129,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.111.182.66,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.182.66],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 02:33:10.106366   46769 host.go:66] Checking if "functional-736733" exists ...
I1219 02:33:10.106669   46769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-736733
I1219 02:33:10.141566   46769 kapi.go:59] client config for functional-736733: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.key", CAFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:33:10.159037   46769 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:10.169657   46769 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:10.174654   46769 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:10.179566   46769 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:10.348768   46769 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:33:10.448824   46769 out.go:179] * Dashboard Token:
I1219 02:33:10.449893   46769 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6Ijh2MlpuMEx4U2dXQzZEWUFvYTlVWXNsRXFMX3B6YVQ1czVBVGFxM3lyejgifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MTk3OTkwLCJpYXQiOjE3NjYxMTE1OTAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNTVjMTRkMWQtNjhiOC00NTQwLTkwMDItYjc0NGZlNjU0ZDIyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiNDNlOGI4MTAtNWY4ZC00OWI5LWFmMjYtOTcwMDYwOTZiNzgwIn19LCJuYmYiOjE3NjYxMTE1OTAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.dvF7WaWLsVdNvFvJrCI6USiQlq4oLPTGiaQfwrorG6qckk8CjGpthGmBIcoRwIwp1N8e3Iztxf0AzyVDHGgX3BZvAtVdFm9H-hbZoS9r24GvI8YabB0V_3wHYqxlErua0H2uRQBaBfcth4qsrEuSgf4wVjBTgtJwF3vYE_WVbtyCu2HOYmWgJrSVYaiyFhoP0lAJ7inFBE11Tzhzyd3Im5lcWN2FOjSeSLR0lRjmHd4MaPK0hpKZoT0mpp7vXwsi-v98Tpw1h9bbAuMefRpLJa0v_3z0L6gQzxxaXBxUm4XW3ccnQxP7T_Waw3gnICfYHL5DFfvye1queK2suhCabA
I1219 02:33:10.450931   46769 out.go:203] https://192.168.49.2:31129
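For reference, the URL logged just above (https://192.168.49.2:31129) is the node IP plus the NodePort of the kubernetes-dashboard-kong-proxy Service shown in the service dump earlier. A quick verification sketch, assuming kubectl is pointed at the functional-736733 cluster (this is not part of the test), reads that port straight from the Service:

	kubectl -n kubernetes-dashboard get svc kubernetes-dashboard-kong-proxy -o jsonpath='{.spec.ports[0].nodePort}'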
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-736733
helpers_test.go:244: (dbg) docker inspect functional-736733:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "307493b955c359ea4777808888b40972ae27699b9843000acf916e09209d517b",
	        "Created": "2025-12-19T02:30:39.242019191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 32700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T02:30:39.275431055Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/307493b955c359ea4777808888b40972ae27699b9843000acf916e09209d517b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/307493b955c359ea4777808888b40972ae27699b9843000acf916e09209d517b/hostname",
	        "HostsPath": "/var/lib/docker/containers/307493b955c359ea4777808888b40972ae27699b9843000acf916e09209d517b/hosts",
	        "LogPath": "/var/lib/docker/containers/307493b955c359ea4777808888b40972ae27699b9843000acf916e09209d517b/307493b955c359ea4777808888b40972ae27699b9843000acf916e09209d517b-json.log",
	        "Name": "/functional-736733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-736733:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-736733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "307493b955c359ea4777808888b40972ae27699b9843000acf916e09209d517b",
	                "LowerDir": "/var/lib/docker/overlay2/38d5b511185518676887efdd823109cb2da5e8c327a26e902b54df15d392c7fc-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/38d5b511185518676887efdd823109cb2da5e8c327a26e902b54df15d392c7fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/38d5b511185518676887efdd823109cb2da5e8c327a26e902b54df15d392c7fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/38d5b511185518676887efdd823109cb2da5e8c327a26e902b54df15d392c7fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-736733",
	                "Source": "/var/lib/docker/volumes/functional-736733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-736733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-736733",
	                "name.minikube.sigs.k8s.io": "functional-736733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a4c196fa154caa5eb467efc25086a9378862e7feb0ce4b8660c2e941be509f48",
	            "SandboxKey": "/var/run/docker/netns/a4c196fa154c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-736733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bcd20725c59e93db97d43edc3da9bd22b277e57abc31f0c5b6baebecc11a98f1",
	                    "EndpointID": "15efa79630c3851273d0fa03f0ab128343942d0ad5742e2d21064d2f0130ed4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ea:2f:07:3c:8d:17",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-736733",
	                        "307493b955c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
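The full inspect dump above is what the post-mortem helper records; when only the published port bindings matter (for example the 22/tcp -> 127.0.0.1:32778 mapping the SSH client used earlier), a format template keeps it to one line, in the same spirit as the cli_runner inspect calls in the log. A sketch, not part of the test:

	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-736733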
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-736733 -n functional-736733
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-736733 logs -n 25: (2.217466404s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-736733 ssh -- ls -la /mount-9p                                                                                        │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:32 UTC │ 19 Dec 25 02:32 UTC │
	│ ssh            │ functional-736733 ssh cat /mount-9p/test-1766111573744245530                                                                     │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:32 UTC │ 19 Dec 25 02:32 UTC │
	│ service        │ functional-736733 service hello-node-connect --url                                                                               │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:32 UTC │ 19 Dec 25 02:32 UTC │
	│ start          │ -p functional-736733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                        │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:32 UTC │                     │
	│ start          │ -p functional-736733 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:32 UTC │                     │
	│ start          │ -p functional-736733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                        │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:32 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-736733 --alsologtostderr -v=1                                                                   │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:32 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh            │ functional-736733 ssh stat /mount-9p/created-by-test                                                                             │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh            │ functional-736733 ssh stat /mount-9p/created-by-pod                                                                              │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh            │ functional-736733 ssh sudo umount -f /mount-9p                                                                                   │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh            │ functional-736733 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ mount          │ -p functional-736733 /tmp/TestFunctionalparallelMountCmdspecific-port286714817/001:/mount-9p --alsologtostderr -v=1 --port 36533 │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ ssh            │ functional-736733 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh            │ functional-736733 ssh -- ls -la /mount-9p                                                                                        │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh            │ functional-736733 ssh sudo umount -f /mount-9p                                                                                   │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ update-context │ functional-736733 update-context --alsologtostderr -v=2                                                                          │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ update-context │ functional-736733 update-context --alsologtostderr -v=2                                                                          │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ mount          │ -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount3 --alsologtostderr -v=1               │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ mount          │ -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount1 --alsologtostderr -v=1               │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ mount          │ -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount2 --alsologtostderr -v=1               │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ ssh            │ functional-736733 ssh findmnt -T /mount1                                                                                         │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │                     │
	│ update-context │ functional-736733 update-context --alsologtostderr -v=2                                                                          │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ image          │ functional-736733 image ls --format short --alsologtostderr                                                                      │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ ssh            │ functional-736733 ssh findmnt -T /mount1                                                                                         │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	│ image          │ functional-736733 image ls --format yaml --alsologtostderr                                                                       │ functional-736733 │ jenkins │ v1.37.0 │ 19 Dec 25 02:33 UTC │ 19 Dec 25 02:33 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:32:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:32:56.695053   46688 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:32:56.695165   46688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:32:56.695172   46688 out.go:374] Setting ErrFile to fd 2...
	I1219 02:32:56.695179   46688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:32:56.695558   46688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:32:56.696122   46688 out.go:368] Setting JSON to false
	I1219 02:32:56.697440   46688 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":928,"bootTime":1766110649,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:32:56.697521   46688 start.go:143] virtualization: kvm guest
	I1219 02:32:56.699323   46688 out.go:179] * [functional-736733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:32:56.700970   46688 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:32:56.701009   46688 notify.go:221] Checking for updates...
	I1219 02:32:56.704165   46688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:32:56.709341   46688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:32:56.715929   46688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:32:56.718306   46688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:32:56.719963   46688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:32:56.722067   46688 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:32:56.722888   46688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:32:56.754475   46688 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:32:56.754587   46688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:32:56.827693   46688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:32:56.814283176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:32:56.827868   46688 docker.go:319] overlay module found
	I1219 02:32:56.830771   46688 out.go:179] * Using the docker driver based on existing profile
	I1219 02:32:56.832133   46688 start.go:309] selected driver: docker
	I1219 02:32:56.832152   46688 start.go:928] validating driver "docker" against &{Name:functional-736733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-736733 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:32:56.832260   46688 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:32:56.834133   46688 out.go:203] 
	W1219 02:32:56.835331   46688 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:32:56.836657   46688 out.go:203] 
	
	
	==> CRI-O <==
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.240907182Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc" id=157163d7-e1a8-4ddc-9141-805a70536e34 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.241596015Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2" id=b2e09d0e-dfbf-4ff7-ad96-8b2f59e332d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.243056611Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard-web:1.7.0" id=dcc02695-d526-4561-a785-57b4775defe1 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.24352408Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2" id=f998ff73-48f8-4960-ba4e-b9950b8cf800 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.244728499Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard-web:1.7.0\""
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.247257211Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-grxgh/kubernetes-dashboard-metrics-scraper" id=16188465-84f7-4b75-8715-de257048b15c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.247481662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.252862426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.253414301Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.279452612Z" level=info msg="Created container e5e5e4b3cb90c56b5c8c44d613ef4f795962c46f3afa9a78bf3687f09206bd98: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-grxgh/kubernetes-dashboard-metrics-scraper" id=16188465-84f7-4b75-8715-de257048b15c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.280241004Z" level=info msg="Starting container: e5e5e4b3cb90c56b5c8c44d613ef4f795962c46f3afa9a78bf3687f09206bd98" id=40aa1c90-31fe-4732-a223-9e4565855ae8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 02:33:07 functional-736733 crio[3712]: time="2025-12-19T02:33:07.283250737Z" level=info msg="Started container" PID=7497 containerID=e5e5e4b3cb90c56b5c8c44d613ef4f795962c46f3afa9a78bf3687f09206bd98 description=kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-grxgh/kubernetes-dashboard-metrics-scraper id=40aa1c90-31fe-4732-a223-9e4565855ae8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f54e66112e02c3b3d6a0f4954a590819078afe25d6b1e5b58fb28c0206eaa1e6
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.66554287Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30" id=dcc02695-d526-4561-a785-57b4775defe1 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.666323125Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-web:1.7.0" id=3d5d82c6-de83-4be1-aee5-882067d59d01 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.668041573Z" level=info msg="Pulling image: kong:3.9" id=885775cc-a6a5-490a-8dde-a883471165a8 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.668228111Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.66848897Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-web:1.7.0" id=b2c68f9c-ba36-4d48-b97e-771c58758f80 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.670030503Z" level=info msg="Trying to access \"docker.io/library/kong:3.9\""
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.675805509Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x9n98/kubernetes-dashboard-web" id=c837baad-54e9-41c3-8d87-44a729b1b6d7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.676051279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.681886644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.682905231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.718661488Z" level=info msg="Created container b7f1df8531bcd77cf8d68b910611ad5c250fbe5673439bb8b6d18e49a3cdb7e5: kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x9n98/kubernetes-dashboard-web" id=c837baad-54e9-41c3-8d87-44a729b1b6d7 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.720134016Z" level=info msg="Starting container: b7f1df8531bcd77cf8d68b910611ad5c250fbe5673439bb8b6d18e49a3cdb7e5" id=58731a34-4be6-4cf4-9916-f3679174a08d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 02:33:09 functional-736733 crio[3712]: time="2025-12-19T02:33:09.723238762Z" level=info msg="Started container" PID=7811 containerID=b7f1df8531bcd77cf8d68b910611ad5c250fbe5673439bb8b6d18e49a3cdb7e5 description=kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-x9n98/kubernetes-dashboard-web id=58731a34-4be6-4cf4-9916-f3679174a08d name=/runtime.v1.RuntimeService/StartContainer sandboxID=560644ee1e14ea138ca4d373d420ad3426aa4b6ba45eb8d16f039b8c2a93e0cf
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED              STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	b7f1df8531bcd       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               2 seconds ago        Running             kubernetes-dashboard-web               0                   560644ee1e14e       kubernetes-dashboard-web-5c9f966b98-x9n98               kubernetes-dashboard
	e5e5e4b3cb90c       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   5 seconds ago        Running             kubernetes-dashboard-metrics-scraper   0                   f54e66112e02c       kubernetes-dashboard-metrics-scraper-7685fd8b77-grxgh   kubernetes-dashboard
	9bdc61ff3e5a4       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              5 seconds ago        Running             kubernetes-dashboard-auth              0                   b05f42970d49e       kubernetes-dashboard-auth-cc8dd7b5f-7chwp               kubernetes-dashboard
	1842c5ed7a6fa       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               7 seconds ago        Running             kubernetes-dashboard-api               0                   92586d4c42a2b       kubernetes-dashboard-api-7bbd59bb4f-jg76j               kubernetes-dashboard
	8812b52881148       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                        8 seconds ago        Exited              mount-munger                           0                   a9384344ac9be       busybox-mount                                           default
	e44787396dc7a       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036                9 seconds ago        Running             mysql                                  0                   17f9abea45d6d       mysql-6bcdcbc558-8hsbz                                  default
	6e892fda3d36d       04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5                                                           9 seconds ago        Running             myfrontend                             0                   09f2ec91ed118       sp-pod                                                  default
	d83c824cea719       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                      23 seconds ago       Running             echo-server                            0                   9f64a114b5e3e       hello-node-connect-7d85dfc575-jncqv                     default
	9eb0adec98be2       public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c                         25 seconds ago       Running             nginx                                  0                   0b5b178b94c2e       nginx-svc                                               default
	ff872e999887c       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                      28 seconds ago       Running             echo-server                            0                   3bfe327577d2f       hello-node-75c85bcc94-kxt8k                             default
	5d2de0c10f819       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           59 seconds ago       Running             storage-provisioner                    2                   1febade454cb8       storage-provisioner                                     kube-system
	e25051f41abb1       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           About a minute ago   Running             kube-apiserver                         0                   020146adc57b4       kube-apiserver-functional-736733                        kube-system
	49c858f4c51bc       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           About a minute ago   Running             kube-scheduler                         1                   d8e9fc0240af2       kube-scheduler-functional-736733                        kube-system
	d1797619dce55       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           About a minute ago   Running             kube-controller-manager                1                   9c5dfcc7a43f5       kube-controller-manager-functional-736733               kube-system
	9263088a15acf       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           About a minute ago   Running             etcd                                   1                   094bfe21e3de5       etcd-functional-736733                                  kube-system
	273e463eac4ae       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           About a minute ago   Running             coredns                                1                   deeed2274a0c8       coredns-66bc5c9577-w5mln                                kube-system
	045bd31ff56a2       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           About a minute ago   Running             kube-proxy                             1                   a44bd9672ef6c       kube-proxy-2xpp7                                        kube-system
	5b89ae503416d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           About a minute ago   Running             kindnet-cni                            1                   3c5ccc4b075ed       kindnet-2v289                                           kube-system
	9e33a9fed3c7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           About a minute ago   Exited              storage-provisioner                    1                   1febade454cb8       storage-provisioner                                     kube-system
	ba9693fa535c5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           About a minute ago   Exited              coredns                                0                   deeed2274a0c8       coredns-66bc5c9577-w5mln                                kube-system
	5df085352c58a       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                         2 minutes ago        Exited              kindnet-cni                            0                   3c5ccc4b075ed       kindnet-2v289                                           kube-system
	e041554503f05       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           2 minutes ago        Exited              kube-proxy                             0                   a44bd9672ef6c       kube-proxy-2xpp7                                        kube-system
	e18e5ce1d0b00       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           2 minutes ago        Exited              kube-scheduler                         0                   d8e9fc0240af2       kube-scheduler-functional-736733                        kube-system
	d97ede7426f3f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           2 minutes ago        Exited              kube-controller-manager                0                   9c5dfcc7a43f5       kube-controller-manager-functional-736733               kube-system
	55f73ff751555       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           2 minutes ago        Exited              etcd                                   0                   094bfe21e3de5       etcd-functional-736733                                  kube-system
	
	
	==> coredns [273e463eac4ae17860c36eac32783fb5581a098cc8d7fb2b7af0ffad57420239] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54099 - 23593 "HINFO IN 4065977834188116326.6594207970198951369. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045426348s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ba9693fa535c592e5ed3cce5e0a9b59ce26cfae3fc6ef1975f6a7a1fef30e0df] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36678 - 28983 "HINFO IN 1819689458378461868.8141669611936413371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036380794s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-736733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-736733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-736733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_30_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:30:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-736733
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:33:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:32:53 +0000   Fri, 19 Dec 2025 02:30:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:32:53 +0000   Fri, 19 Dec 2025 02:30:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:32:53 +0000   Fri, 19 Dec 2025 02:30:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:32:53 +0000   Fri, 19 Dec 2025 02:31:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-736733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                1cc0dcec-4ede-4329-9e0a-d28bfca013f5
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-kxt8k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     hello-node-connect-7d85dfc575-jncqv                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     mysql-6bcdcbc558-8hsbz                                   600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     18s
	  default                     nginx-svc                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-w5mln                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m11s
	  kube-system                 etcd-functional-736733                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m17s
	  kube-system                 kindnet-2v289                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m11s
	  kube-system                 kube-apiserver-functional-736733                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-functional-736733                200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-2xpp7                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-functional-736733                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kubernetes-dashboard        kubernetes-dashboard-api-7bbd59bb4f-jg76j                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     11s
	  kubernetes-dashboard        kubernetes-dashboard-auth-cc8dd7b5f-7chwp                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     11s
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-k25mm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-grxgh    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     11s
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-x9n98                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1850m (23%)  1800m (22%)
	  memory             1532Mi (4%)  2520Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m10s                  kube-proxy       
	  Normal  Starting                 51s                    kube-proxy       
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node functional-736733 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node functional-736733 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x8 over 2m21s)  kubelet          Node functional-736733 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m17s                  kubelet          Node functional-736733 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m17s                  kubelet          Node functional-736733 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m17s                  kubelet          Node functional-736733 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m12s                  node-controller  Node functional-736733 event: Registered Node functional-736733 in Controller
	  Normal  NodeReady                119s                   kubelet          Node functional-736733 status is now: NodeReady
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x9 over 82s)      kubelet          Node functional-736733 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)      kubelet          Node functional-736733 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)      kubelet          Node functional-736733 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                    node-controller  Node functional-736733 event: Registered Node functional-736733 in Controller
	
	
	==> dmesg <==
	[  +0.091115] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025741] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.646270] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 02:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.041250] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.024871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.022884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +8.127187] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[ +16.382230] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[Dec19 02:28] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	
	
	==> etcd [55f73ff751555f9ed59b85593af21f138ce50f75ad2ac2e55344c89066dc5504] <==
	{"level":"warn","ts":"2025-12-19T02:30:53.023049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:30:53.029678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:30:53.035823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:30:53.049513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:30:53.056410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:30:53.063552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:30:53.107623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50436","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:31:48.906619Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:31:48.906717Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-736733","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-19T02:31:48.906845Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:31:48.908371Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:31:48.909792Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:31:48.909846Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-12-19T02:31:48.909856Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:31:48.909853Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:31:48.909926Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-12-19T02:31:48.909907Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:31:48.909940Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:31:48.909941Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:31:48.909953Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-12-19T02:31:48.909955Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:31:48.912100Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-19T02:31:48.912181Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:31:48.912210Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-19T02:31:48.912223Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-736733","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [9263088a15acff6a23685f12b907ac15597c353d51907544e82d1d6fa2c80c9e] <==
	{"level":"warn","ts":"2025-12-19T02:32:11.808442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.817719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.825588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.832096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.839556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.846418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.854022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.861894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.870127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.881882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.888360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.894907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.903277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.910852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.918230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.924931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.931262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.937647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.944615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.951171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.966972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.970393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.981768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:11.987095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:32:12.027271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 02:33:12 up 15 min,  0 user,  load average: 2.67, 1.48, 0.72
	Linux functional-736733 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5b89ae503416dddac4b2b67bf61be459f8ebd1a3b19c665f121d13411d08ec4b] <==
	E1219 02:31:42.306491       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1219 02:31:42.580213       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1219 02:31:42.667175       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1219 02:31:43.053016       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1219 02:31:45.842225       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1219 02:31:46.081937       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1219 02:31:48.821523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1219 02:31:58.929682       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1219 02:32:06.290269       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1219 02:32:07.188682       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1219 02:32:07.389974       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1219 02:32:10.336745       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:35062->10.96.0.1:443: read: connection reset by peer" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1219 02:32:25.372600       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:32:25.372636       1 metrics.go:72] Registering metrics
	I1219 02:32:25.372723       1 controller.go:711] "Syncing nftables rules"
	I1219 02:32:29.180203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:32:29.180244       1 main.go:301] handling current node
	I1219 02:32:39.180811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:32:39.180848       1 main.go:301] handling current node
	I1219 02:32:49.181232       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:32:49.181280       1 main.go:301] handling current node
	I1219 02:32:59.180912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:32:59.180960       1 main.go:301] handling current node
	I1219 02:33:09.180801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:33:09.180854       1 main.go:301] handling current node
	
	
	==> kindnet [5df085352c58a6dc3e8f12f0591a0089ba074b1f4d4e4fd7b71d15bd0bee87e0] <==
	I1219 02:31:03.320577       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 02:31:03.320853       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1219 02:31:03.320996       1 main.go:148] setting mtu 1500 for CNI 
	I1219 02:31:03.321017       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 02:31:03.321038       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T02:31:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 02:31:03.521137       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:31:03.521166       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:31:03.521195       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:31:03.521329       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:31:03.918566       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:31:03.918605       1 metrics.go:72] Registering metrics
	I1219 02:31:03.918677       1 controller.go:711] "Syncing nftables rules"
	I1219 02:31:13.522087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:31:13.522181       1 main.go:301] handling current node
	I1219 02:31:23.528828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:31:23.528865       1 main.go:301] handling current node
	I1219 02:31:33.521691       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:31:33.521758       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e25051f41abb150cc6864b7cc162db984b366e0cc1cc9ff287483b6ab9229067] <==
	I1219 02:32:44.909160       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.99.6"}
	I1219 02:32:48.435429       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.200.232"}
	I1219 02:32:54.762530       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.130.7"}
	E1219 02:32:57.855558       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:43524: use of closed network connection
	I1219 02:32:59.105994       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:32:59.118262       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:32:59.124778       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:32:59.135657       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:32:59.145797       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:32:59.154603       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:32:59.162723       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:32:59.172964       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:32:59.206953       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:32:59.224858       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:32:59.232566       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:32:59.240852       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:33:01.739226       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 02:33:01.796112       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.101.25.154"}
	I1219 02:33:01.802357       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.99.55.25"}
	I1219 02:33:01.805959       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.109.127.71"}
	I1219 02:33:01.814793       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.111.182.66"}
	I1219 02:33:01.814841       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.182.124"}
	E1219 02:33:09.192466       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51542: use of closed network connection
	E1219 02:33:09.907091       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51566: use of closed network connection
	E1219 02:33:10.802458       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51584: use of closed network connection
	
	
	==> kube-controller-manager [d1797619dce55dae5dfcc59f53a6be91a11d79c5e2e2768e5066e3f2b47bb665] <==
	I1219 02:32:15.750241       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:32:15.750254       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 02:32:15.750262       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 02:32:15.753235       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 02:32:15.755524       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 02:32:15.783013       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 02:32:15.783037       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:32:15.783052       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1219 02:32:15.783108       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 02:32:15.783123       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 02:32:15.783130       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 02:32:15.783240       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 02:32:15.783453       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 02:32:15.783798       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 02:32:15.785063       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 02:32:15.785095       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 02:32:15.787724       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:32:15.787734       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 02:32:15.790903       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 02:32:15.793173       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 02:32:15.794297       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 02:32:15.796605       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1219 02:32:15.797834       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 02:32:15.800093       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 02:32:15.806236       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [d97ede7426f3f7058eaff16a1328b891fc38f0c0fcef35d17c0eb97edcd56047] <==
	I1219 02:31:00.480774       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1219 02:31:00.480797       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 02:31:00.481141       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 02:31:00.481493       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 02:31:00.481587       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 02:31:00.481617       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:31:00.481734       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 02:31:00.481747       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 02:31:00.481997       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 02:31:00.482860       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 02:31:00.482887       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 02:31:00.484126       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 02:31:00.485574       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 02:31:00.486400       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 02:31:00.486466       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 02:31:00.486527       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 02:31:00.486536       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 02:31:00.486543       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 02:31:00.486645       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 02:31:00.489946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:31:00.491555       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1219 02:31:00.493060       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-736733" podCIDRs=["10.244.0.0/24"]
	I1219 02:31:00.497275       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 02:31:00.502083       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:31:15.482475       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [045bd31ff56a206867698accd0d6a0b3212e8ecd4b72d4335434a647d089fa69] <==
	E1219 02:31:38.896460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-736733&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:31:39.891652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-736733&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:31:42.699226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-736733&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:31:47.603267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-736733&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:32:07.223587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-736733&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1219 02:32:21.295697       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:32:21.295753       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:32:21.295826       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:32:21.315077       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:32:21.315126       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:32:21.320600       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:32:21.320935       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:32:21.320964       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:32:21.322405       1 config.go:200] "Starting service config controller"
	I1219 02:32:21.322428       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:32:21.322514       1 config.go:309] "Starting node config controller"
	I1219 02:32:21.322526       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:32:21.322532       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:32:21.322536       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:32:21.322795       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:32:21.322723       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:32:21.322869       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:32:21.422644       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:32:21.423027       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:32:21.423037       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e041554503f0526f5e407d2e883b76447ca43eac55b70affb71177c22ec2b489] <==
	I1219 02:31:01.973089       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:31:02.051692       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:31:02.152762       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:31:02.152793       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:31:02.152859       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:31:02.172148       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:31:02.172248       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:31:02.177286       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:31:02.177729       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:31:02.177763       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:31:02.179179       1 config.go:200] "Starting service config controller"
	I1219 02:31:02.179198       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:31:02.179234       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:31:02.179266       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:31:02.179318       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:31:02.179350       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:31:02.179355       1 config.go:309] "Starting node config controller"
	I1219 02:31:02.179411       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:31:02.179418       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:31:02.279370       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:31:02.279386       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:31:02.279417       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [49c858f4c51bc16530ae0ee5b6bd23e3ca1d7f11d7df88522ba7121564bcd1f4] <==
	I1219 02:32:11.080186       1 serving.go:386] Generated self-signed cert in-memory
	W1219 02:32:12.376996       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 02:32:12.377033       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 02:32:12.377051       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 02:32:12.377061       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 02:32:12.413348       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 02:32:12.413387       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:32:12.417469       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:32:12.417520       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:32:12.418579       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:32:12.418660       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:32:12.518131       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e18e5ce1d0b007d185e4d9986209b8d4151ce0a92fc20c2b0ee08dce2492adcf] <==
	E1219 02:30:53.513623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:30:53.513638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:30:53.513697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:30:53.513721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:30:53.513810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:30:53.514336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:30:53.514409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:30:53.514503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:30:53.514511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:30:54.388057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:30:54.412692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:30:54.437150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:30:54.560935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:30:54.660457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:30:54.667562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:30:54.669425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:30:54.672465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 02:30:54.678490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1219 02:30:57.704502       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:31:48.688582       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:31:48.688581       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:31:48.688813       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:31:48.688839       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:31:48.688855       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:31:48.688876       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 19 02:33:01 functional-736733 kubelet[4263]: I1219 02:33:01.917114    4263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5bvz\" (UniqueName: \"kubernetes.io/projected/618eb4b8-ba9d-4760-bd41-6a32da88f3f4-kube-api-access-x5bvz\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-grxgh\" (UID: \"618eb4b8-ba9d-4760-bd41-6a32da88f3f4\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-grxgh"
	Dec 19 02:33:01 functional-736733 kubelet[4263]: I1219 02:33:01.917145    4263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkpf6\" (UniqueName: \"kubernetes.io/projected/8bca22bc-d10c-4214-9547-990c2a0c507a-kube-api-access-lkpf6\") pod \"kubernetes-dashboard-api-7bbd59bb4f-jg76j\" (UID: \"8bca22bc-d10c-4214-9547-990c2a0c507a\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-7bbd59bb4f-jg76j"
	Dec 19 02:33:01 functional-736733 kubelet[4263]: I1219 02:33:01.917170    4263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmxnk\" (UniqueName: \"kubernetes.io/projected/b52aeebc-0a07-4e93-aca8-d98a8b517843-kube-api-access-tmxnk\") pod \"kubernetes-dashboard-auth-cc8dd7b5f-7chwp\" (UID: \"b52aeebc-0a07-4e93-aca8-d98a8b517843\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-cc8dd7b5f-7chwp"
	Dec 19 02:33:01 functional-736733 kubelet[4263]: I1219 02:33:01.917200    4263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8bca22bc-d10c-4214-9547-990c2a0c507a-tmp-volume\") pod \"kubernetes-dashboard-api-7bbd59bb4f-jg76j\" (UID: \"8bca22bc-d10c-4214-9547-990c2a0c507a\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-7bbd59bb4f-jg76j"
	Dec 19 02:33:01 functional-736733 kubelet[4263]: I1219 02:33:01.917222    4263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1f26faa-ba36-49e0-beac-347ee1a521c2-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-k25mm\" (UID: \"f1f26faa-ba36-49e0-beac-347ee1a521c2\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-k25mm"
	Dec 19 02:33:03 functional-736733 kubelet[4263]: I1219 02:33:03.946387    4263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/mysql-6bcdcbc558-8hsbz" podStartSLOduration=2.177032426 podStartE2EDuration="9.94636705s" podCreationTimestamp="2025-12-19 02:32:54 +0000 UTC" firstStartedPulling="2025-12-19 02:32:55.148680478 +0000 UTC m=+64.535105592" lastFinishedPulling="2025-12-19 02:33:02.918015082 +0000 UTC m=+72.304440216" observedRunningTime="2025-12-19 02:33:03.946062823 +0000 UTC m=+73.332487958" watchObservedRunningTime="2025-12-19 02:33:03.94636705 +0000 UTC m=+73.332792185"
	Dec 19 02:33:03 functional-736733 kubelet[4263]: I1219 02:33:03.956792    4263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.95676727 podStartE2EDuration="3.95676727s" podCreationTimestamp="2025-12-19 02:33:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:33:03.95630084 +0000 UTC m=+73.342725975" watchObservedRunningTime="2025-12-19 02:33:03.95676727 +0000 UTC m=+73.343192405"
	Dec 19 02:33:05 functional-736733 kubelet[4263]: I1219 02:33:05.164170    4263 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:33:05 functional-736733 kubelet[4263]: I1219 02:33:05.164246    4263 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.051902    4263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-7bbd59bb4f-jg76j" podStartSLOduration=2.761507419 podStartE2EDuration="5.051881615s" podCreationTimestamp="2025-12-19 02:33:01 +0000 UTC" firstStartedPulling="2025-12-19 02:33:02.873544533 +0000 UTC m=+72.259969664" lastFinishedPulling="2025-12-19 02:33:05.163918733 +0000 UTC m=+74.550343860" observedRunningTime="2025-12-19 02:33:06.051650844 +0000 UTC m=+75.438075979" watchObservedRunningTime="2025-12-19 02:33:06.051881615 +0000 UTC m=+75.438306751"
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.143416    4263 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/be820093-731a-4e77-9032-7a7d1a830c0b-test-volume\") pod \"be820093-731a-4e77-9032-7a7d1a830c0b\" (UID: \"be820093-731a-4e77-9032-7a7d1a830c0b\") "
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.143504    4263 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vz9q\" (UniqueName: \"kubernetes.io/projected/be820093-731a-4e77-9032-7a7d1a830c0b-kube-api-access-7vz9q\") pod \"be820093-731a-4e77-9032-7a7d1a830c0b\" (UID: \"be820093-731a-4e77-9032-7a7d1a830c0b\") "
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.143546    4263 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be820093-731a-4e77-9032-7a7d1a830c0b-test-volume" (OuterVolumeSpecName: "test-volume") pod "be820093-731a-4e77-9032-7a7d1a830c0b" (UID: "be820093-731a-4e77-9032-7a7d1a830c0b"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.143666    4263 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/be820093-731a-4e77-9032-7a7d1a830c0b-test-volume\") on node \"functional-736733\" DevicePath \"\""
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.146280    4263 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be820093-731a-4e77-9032-7a7d1a830c0b-kube-api-access-7vz9q" (OuterVolumeSpecName: "kube-api-access-7vz9q") pod "be820093-731a-4e77-9032-7a7d1a830c0b" (UID: "be820093-731a-4e77-9032-7a7d1a830c0b"). InnerVolumeSpecName "kube-api-access-7vz9q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.244194    4263 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7vz9q\" (UniqueName: \"kubernetes.io/projected/be820093-731a-4e77-9032-7a7d1a830c0b-kube-api-access-7vz9q\") on node \"functional-736733\" DevicePath \"\""
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.426542    4263 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.426620    4263 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.952046    4263 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9384344ac9be1aa662d1e7f8f03c4e3110c651f51245971df94342e0c477916"
	Dec 19 02:33:06 functional-736733 kubelet[4263]: I1219 02:33:06.968732    4263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-cc8dd7b5f-7chwp" podStartSLOduration=2.416215356 podStartE2EDuration="5.968684437s" podCreationTimestamp="2025-12-19 02:33:01 +0000 UTC" firstStartedPulling="2025-12-19 02:33:02.873827116 +0000 UTC m=+72.260252243" lastFinishedPulling="2025-12-19 02:33:06.426296189 +0000 UTC m=+75.812721324" observedRunningTime="2025-12-19 02:33:06.968356773 +0000 UTC m=+76.354781911" watchObservedRunningTime="2025-12-19 02:33:06.968684437 +0000 UTC m=+76.355109572"
	Dec 19 02:33:07 functional-736733 kubelet[4263]: I1219 02:33:07.242944    4263 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:33:07 functional-736733 kubelet[4263]: I1219 02:33:07.243042    4263 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:33:07 functional-736733 kubelet[4263]: I1219 02:33:07.980045    4263 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-grxgh" podStartSLOduration=2.614039338 podStartE2EDuration="6.98000727s" podCreationTimestamp="2025-12-19 02:33:01 +0000 UTC" firstStartedPulling="2025-12-19 02:33:02.876758278 +0000 UTC m=+72.263183410" lastFinishedPulling="2025-12-19 02:33:07.242726208 +0000 UTC m=+76.629151342" observedRunningTime="2025-12-19 02:33:07.97940405 +0000 UTC m=+77.365829206" watchObservedRunningTime="2025-12-19 02:33:07.98000727 +0000 UTC m=+77.366432405"
	Dec 19 02:33:09 functional-736733 kubelet[4263]: I1219 02:33:09.667928    4263 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:33:09 functional-736733 kubelet[4263]: I1219 02:33:09.668019    4263 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	
	
	==> kubernetes-dashboard [1842c5ed7a6fa7a24bf2b548d3aa4d1d321f579c53fcd8f5e8de304cf60249ed] <==
	I1219 02:33:05.251731       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 02:33:05.251800       1 init.go:49] Using in-cluster config
	I1219 02:33:05.251984       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 02:33:05.251996       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 02:33:05.252000       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 02:33:05.252004       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 02:33:05.257362       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 02:33:05.257385       1 client.go:265] Creating in-cluster Sidecar client
	I1219 02:33:05.264348       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 02:33:05.266872       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> kubernetes-dashboard [9bdc61ff3e5a46abf7825e3f2b1925781fa3f100ab7b80666e2a14931a79ce00] <==
	I1219 02:33:06.509620       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 02:33:06.509687       1 init.go:49] Using in-cluster config
	I1219 02:33:06.509823       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [b7f1df8531bcd77cf8d68b910611ad5c250fbe5673439bb8b6d18e49a3cdb7e5] <==
	I1219 02:33:09.813124       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 02:33:09.813221       1 init.go:48] Using in-cluster config
	I1219 02:33:09.813450       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [e5e5e4b3cb90c56b5c8c44d613ef4f795962c46f3afa9a78bf3687f09206bd98] <==
	I1219 02:33:07.302886       1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
	W1219 02:33:07.302954       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1219 02:33:07.303101       1 main.go:51] Kubernetes host: https://10.96.0.1:443
	I1219 02:33:07.303108       1 main.go:52] Namespace(s): []
	
	
	==> storage-provisioner [5d2de0c10f8194950fa281ab59717a55f385d25c4c8c56ed60b0a4dde27921db] <==
	I1219 02:32:50.560568       1 volume_store.go:212] Trying to save persistentvolume "pvc-7f24b592-a1f4-40d4-a4fc-10e21ce00002"
	I1219 02:32:50.567347       1 volume_store.go:219] persistentvolume "pvc-7f24b592-a1f4-40d4-a4fc-10e21ce00002" saved
	I1219 02:32:50.567423       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7f24b592-a1f4-40d4-a4fc-10e21ce00002", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-7f24b592-a1f4-40d4-a4fc-10e21ce00002
	W1219 02:32:52.527854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:52.533971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:54.537287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:54.541258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:56.545612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:56.554328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:58.559100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:32:58.575454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:00.579269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:00.634606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:02.637841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:02.650681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:04.654203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:04.658487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:06.661502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:06.666596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:08.670770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:08.674898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:10.684418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:10.697335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:12.702006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:12.707912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9e33a9fed3c7a9f475400cec122acd478efa30d06ac384a6bb35db60e74d1597] <==
	I1219 02:31:38.803220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 02:31:38.804864       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-736733 -n functional-736733
helpers_test.go:270: (dbg) Run:  kubectl --context functional-736733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount kubernetes-dashboard-kong-9849c64bd-k25mm
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-736733 describe pod busybox-mount kubernetes-dashboard-kong-9849c64bd-k25mm
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-736733 describe pod busybox-mount kubernetes-dashboard-kong-9849c64bd-k25mm: exit status 1 (79.055819ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-736733/192.168.49.2
	Start Time:       Fri, 19 Dec 2025 02:32:55 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://8812b528811480a20f4ea6d210d8720ac6c702e64e8ec691531dbe411910af79
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:33:04 +0000
	      Finished:     Fri, 19 Dec 2025 02:33:04 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7vz9q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7vz9q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  18s   default-scheduler  Successfully assigned default/busybox-mount to functional-736733
	  Normal  Pulling    17s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.322s (8.127s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9s    kubelet            spec.containers{mount-munger}: Created container: mount-munger
	  Normal  Started    9s    kubelet            spec.containers{mount-munger}: Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-k25mm" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-736733 describe pod busybox-mount kubernetes-dashboard-kong-9849c64bd-k25mm: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (17.13s)
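For context, the post-mortem above (helpers_test.go:270) finds the non-running pods with a kubectl field selector. Below is a minimal client-go sketch of the same query; it is illustrative only and assumes the current kubeconfig context already points at the cluster under test (here functional-736733), which is not how the test harness itself is wired.

	// Sketch: list pods whose phase is not Running across all namespaces,
	// mirroring the field selector used by the post-mortem step above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: ~/.kube/config selects the functional-736733 context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}

This is the programmatic analogue of the kubectl --field-selector=status.phase!=Running invocation the harness runs above.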

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (19.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-382801 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-382801 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-382801 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-382801 --alsologtostderr -v=1] stderr:
I1219 02:35:13.958654   67456 out.go:360] Setting OutFile to fd 1 ...
I1219 02:35:13.958781   67456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:13.958790   67456 out.go:374] Setting ErrFile to fd 2...
I1219 02:35:13.958795   67456 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:13.958987   67456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:35:13.959226   67456 mustload.go:66] Loading cluster: functional-382801
I1219 02:35:13.959582   67456 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:13.960024   67456 cli_runner.go:164] Run: docker container inspect functional-382801 --format={{.State.Status}}
I1219 02:35:13.980966   67456 host.go:66] Checking if "functional-382801" exists ...
I1219 02:35:13.981322   67456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1219 02:35:14.042639   67456 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:35:14.029991157 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1219 02:35:14.042803   67456 api_server.go:166] Checking apiserver status ...
I1219 02:35:14.042878   67456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:35:14.042945   67456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382801
I1219 02:35:14.063595   67456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-382801/id_rsa Username:docker}
I1219 02:35:14.171484   67456 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4502/cgroup
W1219 02:35:14.179788   67456 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4502/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1219 02:35:14.179876   67456 ssh_runner.go:195] Run: ls
I1219 02:35:14.183589   67456 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1219 02:35:14.187636   67456 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1219 02:35:14.187691   67456 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:35:14.187864   67456 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:14.187882   67456 addons.go:70] Setting dashboard=true in profile "functional-382801"
I1219 02:35:14.187891   67456 addons.go:239] Setting addon dashboard=true in "functional-382801"
I1219 02:35:14.187924   67456 host.go:66] Checking if "functional-382801" exists ...
I1219 02:35:14.188253   67456 cli_runner.go:164] Run: docker container inspect functional-382801 --format={{.State.Status}}
I1219 02:35:14.206247   67456 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:35:14.206273   67456 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:35:14.206320   67456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382801
I1219 02:35:14.225768   67456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-382801/id_rsa Username:docker}
I1219 02:35:14.337937   67456 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:35:14.342201   67456 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:35:14.345470   67456 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:35:15.312610   67456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:35:18.665908   67456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.353239938s)
I1219 02:35:18.666055   67456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:35:18.905833   67456 addons.go:500] Verifying addon dashboard=true in "functional-382801"
I1219 02:35:18.906231   67456 cli_runner.go:164] Run: docker container inspect functional-382801 --format={{.State.Status}}
I1219 02:35:18.931955   67456 out.go:179] * Verifying dashboard addon...
I1219 02:35:18.934150   67456 kapi.go:59] client config for functional-382801: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.key", CAFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:35:18.934692   67456 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:35:18.934726   67456 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:35:18.934734   67456 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:35:18.934740   67456 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:35:18.934750   67456 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:35:18.935141   67456 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:35:18.946973   67456 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:35:18.946997   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:19.440046   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:19.939592   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:20.439932   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:20.938626   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:21.440181   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:21.941635   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:22.439427   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:22.939813   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:23.439072   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:23.939522   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:24.439292   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:24.939278   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:25.549060   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:25.939197   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:26.438367   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:26.939636   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:27.439424   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:27.939574   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:28.439156   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:28.939143   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:29.439192   67456 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:35:29.939450   67456 kapi.go:107] duration metric: took 11.00431031s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 02:35:29.942155   67456 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-382801 addons enable metrics-server

                                                
                                                
I1219 02:35:29.943582   67456 addons.go:202] Writing out "functional-382801" config to set dashboard=true...
W1219 02:35:29.943908   67456 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 02:35:29.945127   67456 kapi.go:59] client config for functional-382801: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.key", CAFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:35:29.948953   67456 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy  kubernetes-dashboard  49191659-ee3e-4793-9287-bbff167ef9c2 840 0 2025-12-19 02:35:18 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 02:35:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".
":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:32412,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.106.17.101,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.17.101],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 02:35:29.949493   67456 host.go:66] Checking if "functional-382801" exists ...
I1219 02:35:29.950431   67456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-382801
I1219 02:35:29.978766   67456 kapi.go:59] client config for functional-382801: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.key", CAFile:"/home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPr
otos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:35:29.990346   67456 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:35:29.994874   67456 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:35:30.001079   67456 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:35:30.008149   67456 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:35:30.183629   67456 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:35:30.276785   67456 out.go:179] * Dashboard Token:
I1219 02:35:30.281119   67456 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1qdTdHTmVCQWpsZkN4QmZMTDRWZjFVaEJBUTRvYmlpVEF2aW8wbDBCZ00ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MTk4MTMwLCJpYXQiOjE3NjYxMTE3MzAsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiZWY5MWE5YmEtNzAxMS00NDdmLTkyMjQtYzI1MDMyZmM0ZGRiIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiM2E1ZTE0NDMtMTI1MC00NGRiLTg2ODktNzg5YmUxZWI1ZTVhIn19LCJuYmYiOjE3NjYxMTE3MzAsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.aNmstT-YKRlmjbQ-dGzHNGIRZduVGDgK5mRe74IlEkEOrclRkEn3JVjXbR2i5HCiYPhMiavJmNaifzxft7pIFy1fHD5dxbDOFaOsVq9RodS_ZyFUjU6yp-LvHi4VU9C4U3PK-fTLvtsPB7yvrZ1UlSzUSk2OEVJeDsN4BE1azfDS5v_slOROpca2lKKAluUzIGrkJof8LSGNtlJgddA1eki-x9F0AzmJNWGN-3BXLYu2GLvb7B0h-Q3bdgUELErReLu6qFSa_NAWOSUDSJ-MA-NojoplWJmLlWdtjdOlMD9AlZ2bcE5odcZp91in273CHxtqzCh4ocqvAECD5rKv_Q
I1219 02:35:30.297940   67456 out.go:203] https://192.168.49.2:32412
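The stderr above traces the dashboard-enable flow end to end: helm-install the kubernetes-dashboard chart, wait for a pod carrying the app.kubernetes.io/name=kubernetes-dashboard-web label, then resolve the kong-proxy NodePort service and print https://<node-ip>:<nodePort>. A minimal client-go sketch of those last two steps follows; it is illustrative only, with the namespace, label, service name, and node IP taken from the log lines above rather than from minikube's own code.

	// Sketch: wait for the dashboard-web pod, then build the NodePort URL
	// from the kong-proxy service, as the kapi.go/service.go lines above show.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: ~/.kube/config selects the functional-382801 context.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ns := "kubernetes-dashboard"

		// Poll until a dashboard-web pod reports Running (the log above polls
		// on a similar sub-second interval).
		for {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{
				LabelSelector: "app.kubernetes.io/name=kubernetes-dashboard-web",
			})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				break
			}
			time.Sleep(500 * time.Millisecond)
		}

		// kong-proxy is a NodePort service; pair its port with the node IP
		// (192.168.49.2 in this run) to form the dashboard URL.
		svc, err := cs.CoreV1().Services(ns).Get(context.TODO(), "kubernetes-dashboard-kong-proxy", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("https://%s:%d\n", "192.168.49.2", svc.Spec.Ports[0].NodePort)
	}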
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-382801
helpers_test.go:244: (dbg) docker inspect functional-382801:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7defa4a84d90498c6d2ac2d1762d4e5ba6288a297f6c01abc92ee741d8f78290",
	        "Created": "2025-12-19T02:33:24.827618033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 53940,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T02:33:24.86143371Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/7defa4a84d90498c6d2ac2d1762d4e5ba6288a297f6c01abc92ee741d8f78290/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7defa4a84d90498c6d2ac2d1762d4e5ba6288a297f6c01abc92ee741d8f78290/hostname",
	        "HostsPath": "/var/lib/docker/containers/7defa4a84d90498c6d2ac2d1762d4e5ba6288a297f6c01abc92ee741d8f78290/hosts",
	        "LogPath": "/var/lib/docker/containers/7defa4a84d90498c6d2ac2d1762d4e5ba6288a297f6c01abc92ee741d8f78290/7defa4a84d90498c6d2ac2d1762d4e5ba6288a297f6c01abc92ee741d8f78290-json.log",
	        "Name": "/functional-382801",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-382801:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-382801",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7defa4a84d90498c6d2ac2d1762d4e5ba6288a297f6c01abc92ee741d8f78290",
	                "LowerDir": "/var/lib/docker/overlay2/a7048ddf1bc9399a2c5342081d908247b61febd6a676b4942e89465dc8d3fd34-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a7048ddf1bc9399a2c5342081d908247b61febd6a676b4942e89465dc8d3fd34/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a7048ddf1bc9399a2c5342081d908247b61febd6a676b4942e89465dc8d3fd34/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a7048ddf1bc9399a2c5342081d908247b61febd6a676b4942e89465dc8d3fd34/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-382801",
	                "Source": "/var/lib/docker/volumes/functional-382801/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-382801",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-382801",
	                "name.minikube.sigs.k8s.io": "functional-382801",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f40b9803657af0523a42f4f128366b74cf1f62c9f7c47d65ea340484f1fe92d5",
	            "SandboxKey": "/var/run/docker/netns/f40b9803657a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-382801": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8986ecdda03099d80d44dc450be92f62c711a67baa5618e3b1b458f3ed80bf4",
	                    "EndpointID": "5792390ef95156783534003720fbf6fd83a7d5b9515e125a1fd94bdb79d32bb9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "6e:d3:49:dd:90:c0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-382801",
	                        "7defa4a84d90"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
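The empty "HostPort" values under HostConfig.PortBindings in the inspect output above mean Docker was asked to bind ephemeral host ports; the ports it actually assigned (32783-32787, all on 127.0.0.1) appear further down under NetworkSettings.Ports. If the same mapping needs to be pulled out of a live profile, a minimal sketch using the standard docker CLI is shown below (the container name functional-382801 comes from this report; the template assumes every exposed port has at least one binding, as it does here):

	docker inspect -f '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostIp}}:{{(index $bindings 0).HostPort}}{{"\n"}}{{end}}' functional-382801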
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-382801 -n functional-382801
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-382801 logs -n 25: (2.318087456s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-382801 ssh stat /mount-9p/created-by-test                                                                                                │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ ssh            │ functional-382801 ssh stat /mount-9p/created-by-pod                                                                                                 │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ ssh            │ functional-382801 ssh sudo umount -f /mount-9p                                                                                                      │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ mount          │ -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4009361728/001:/mount-9p --alsologtostderr -v=1 --port 45901 │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ ssh            │ functional-382801 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ ssh            │ functional-382801 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ ssh            │ functional-382801 ssh -- ls -la /mount-9p                                                                                                           │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ ssh            │ functional-382801 ssh sudo umount -f /mount-9p                                                                                                      │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ mount          │ -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount2 --alsologtostderr -v=1                │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ mount          │ -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount3 --alsologtostderr -v=1                │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ mount          │ -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount1 --alsologtostderr -v=1                │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ ssh            │ functional-382801 ssh findmnt -T /mount1                                                                                                            │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ ssh            │ functional-382801 ssh findmnt -T /mount1                                                                                                            │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ ssh            │ functional-382801 ssh findmnt -T /mount2                                                                                                            │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ ssh            │ functional-382801 ssh findmnt -T /mount3                                                                                                            │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ mount          │ -p functional-382801 --kill=true                                                                                                                    │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ update-context │ functional-382801 update-context --alsologtostderr -v=2                                                                                             │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ update-context │ functional-382801 update-context --alsologtostderr -v=2                                                                                             │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ update-context │ functional-382801 update-context --alsologtostderr -v=2                                                                                             │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ image          │ functional-382801 image ls --format short --alsologtostderr                                                                                         │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ image          │ functional-382801 image ls --format yaml --alsologtostderr                                                                                          │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ ssh            │ functional-382801 ssh pgrep buildkitd                                                                                                               │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ image          │ functional-382801 image build -t localhost/my-image:functional-382801 testdata/build --alsologtostderr                                              │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │ 19 Dec 25 02:35 UTC │
	│ image          │ functional-382801 image ls                                                                                                                          │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	│ image          │ functional-382801 image ls --format json --alsologtostderr                                                                                          │ functional-382801 │ jenkins │ v1.37.0 │ 19 Dec 25 02:35 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:35:13
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:35:13.795134   67328 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:35:13.795255   67328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:13.795267   67328 out.go:374] Setting ErrFile to fd 2...
	I1219 02:35:13.795274   67328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:13.795634   67328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:35:13.796127   67328 out.go:368] Setting JSON to false
	I1219 02:35:13.797228   67328 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1065,"bootTime":1766110649,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:35:13.797285   67328 start.go:143] virtualization: kvm guest
	I1219 02:35:13.799159   67328 out.go:179] * [functional-382801] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:35:13.800378   67328 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:35:13.800365   67328 notify.go:221] Checking for updates...
	I1219 02:35:13.802746   67328 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:35:13.804239   67328 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:35:13.805632   67328 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:35:13.807577   67328 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:35:13.808694   67328 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:35:13.810116   67328 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 02:35:13.810744   67328 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:35:13.835634   67328 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:35:13.835775   67328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:35:13.889870   67328 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:35:13.880473265 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:35:13.889992   67328 docker.go:319] overlay module found
	I1219 02:35:13.891763   67328 out.go:179] * Using the docker driver based on existing profile
	I1219 02:35:13.892819   67328 start.go:309] selected driver: docker
	I1219 02:35:13.892835   67328 start.go:928] validating driver "docker" against &{Name:functional-382801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-382801 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:13.892916   67328 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:35:13.894485   67328 out.go:203] 
	W1219 02:35:13.895619   67328 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:35:13.896726   67328 out.go:203] 
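	The warning a few lines up is why this start attempt exited: minikube's preflight validation rejects the requested 250MiB memory allocation because it is below the 1800MB usable minimum, and aborts with reason code RSRC_INSUFFICIENT_REQ_MEMORY before touching the cluster. The same guard can be exercised in isolation; a sketch with illustrative flag values (only the profile name and the 1800MB threshold are taken from this report):
	
	  # trips the RSRC_INSUFFICIENT_REQ_MEMORY preflight check
	  out/minikube-linux-amd64 start -p functional-382801 --driver=docker --memory=250mb
	  # passes the check: any value at or above the 1800MB minimum
	  out/minikube-linux-amd64 start -p functional-382801 --driver=docker --memory=2048mb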
	
	
	==> CRI-O <==
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.805004096Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc" id=dc08ce6e-3415-4e44-83e0-e4e02b0b1c18 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.805628432Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2" id=060a8ed7-17ba-495a-b101-9394d3c3aeac name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.806847974Z" level=info msg="Pulling image: docker.io/kubernetesui/dashboard-web:1.7.0" id=228ea70e-2de0-4002-9941-28445e6531d7 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.80783578Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2" id=810cff36-fff0-4b9f-a396-3d4e35985777 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.808380019Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard-web:1.7.0\""
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.811939263Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-27766/kubernetes-dashboard-metrics-scraper" id=8d9ea095-e782-4ea4-badb-9368a876f0ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.812082553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.816478922Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.817067841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.848570842Z" level=info msg="Created container a775927e45d13a16a427c57d2c89044040b7e3247594a346a154ddc3c6c5824e: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-27766/kubernetes-dashboard-metrics-scraper" id=8d9ea095-e782-4ea4-badb-9368a876f0ed name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.849288887Z" level=info msg="Starting container: a775927e45d13a16a427c57d2c89044040b7e3247594a346a154ddc3c6c5824e" id=d2d63b1d-3dc1-481b-a3ea-60eb71bafd19 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 02:35:26 functional-382801 crio[3683]: time="2025-12-19T02:35:26.851808689Z" level=info msg="Started container" PID=7837 containerID=a775927e45d13a16a427c57d2c89044040b7e3247594a346a154ddc3c6c5824e description=kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-27766/kubernetes-dashboard-metrics-scraper id=d2d63b1d-3dc1-481b-a3ea-60eb71bafd19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d09a8afaa2088e62c52cc89a47cea4f1fbdbe579cde58babd1a699513abe842b
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.076264721Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30" id=228ea70e-2de0-4002-9941-28445e6531d7 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.076991916Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-web:1.7.0" id=0d54ebe3-6934-4acf-a6e3-c2c7af7aa828 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.078807826Z" level=info msg="Pulling image: kong:3.9" id=db2a52d5-e345-4baa-a8d4-168139839ba2 name=/runtime.v1.ImageService/PullImage
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.078936423Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.079770043Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-web:1.7.0" id=fb4ded85-8c2e-43ea-8d40-5014938353ec name=/runtime.v1.ImageService/ImageStatus
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.0806541Z" level=info msg="Trying to access \"docker.io/library/kong:3.9\""
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.092554994Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-82nbr/kubernetes-dashboard-web" id=03849166-a853-4210-8b75-f4b353761a0e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.092717784Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.102324152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.103045303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.133070544Z" level=info msg="Created container f19e80056691edb928bf0bd009a29389e7e6502af656fd955e2fefddc4c345e5: kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-82nbr/kubernetes-dashboard-web" id=03849166-a853-4210-8b75-f4b353761a0e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.133762988Z" level=info msg="Starting container: f19e80056691edb928bf0bd009a29389e7e6502af656fd955e2fefddc4c345e5" id=d7eff5d2-5788-4868-bc35-f4988a72d398 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 02:35:29 functional-382801 crio[3683]: time="2025-12-19T02:35:29.136148018Z" level=info msg="Started container" PID=8007 containerID=f19e80056691edb928bf0bd009a29389e7e6502af656fd955e2fefddc4c345e5 description=kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-82nbr/kubernetes-dashboard-web id=d7eff5d2-5788-4868-bc35-f4988a72d398 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd21e111635a04687e3b58ffb2bdebf2068f1c3745cd64b20e743829324d0bde
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED              STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	f19e80056691e       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               2 seconds ago        Running             kubernetes-dashboard-web               0                   cd21e111635a0       kubernetes-dashboard-web-7f7574785f-82nbr               kubernetes-dashboard
	a775927e45d13       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   4 seconds ago        Running             kubernetes-dashboard-metrics-scraper   0                   d09a8afaa2088       kubernetes-dashboard-metrics-scraper-594bbfb84b-27766   kubernetes-dashboard
	c40e010920f4b       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               5 seconds ago        Running             kubernetes-dashboard-api               0                   c13f6198b1fee       kubernetes-dashboard-api-54c76d8866-jmvvp               kubernetes-dashboard
	4f579c8d723f9       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              7 seconds ago        Running             kubernetes-dashboard-auth              0                   22e96eb5f8676       kubernetes-dashboard-auth-84cbccd86c-42tp5              kubernetes-dashboard
	40fc9264b8ec2       public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036                8 seconds ago        Running             mysql                                  0                   77e10fc0ec8b1       mysql-7d7b65bc95-hvl88                                  default
	f86bbf1da312f       04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5                                                           8 seconds ago        Running             myfrontend                             0                   ae90aa8166fdc       sp-pod                                                  default
	7e7f8bf4d56f5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                        15 seconds ago       Exited              mount-munger                           0                   bbd6aa027a67e       busybox-mount                                           default
	be4485870bfe2       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                      23 seconds ago       Running             echo-server                            0                   00b9674da2445       hello-node-connect-9f67c86d4-6h5j6                      default
	a8a7db2cf8ff2       public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c                         25 seconds ago       Running             nginx                                  0                   d228260252bdb       nginx-svc                                               default
	c3eed396670e2       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                      28 seconds ago       Running             echo-server                            0                   38faf17e1bae8       hello-node-5758569b79-2mrns                             default
	a7b0071538e93       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                           53 seconds ago       Running             kube-apiserver                         0                   940aa6e2c3d9a       kube-apiserver-functional-382801                        kube-system
	ab9ed915c3bf4       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                           53 seconds ago       Running             kube-controller-manager                2                   06d5b1aa98ffc       kube-controller-manager-functional-382801               kube-system
	e4d91d98a5ce2       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                           About a minute ago   Running             kube-scheduler                         1                   f5e926474ffce       kube-scheduler-functional-382801                        kube-system
	bdb00580ea67d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                           About a minute ago   Running             etcd                                   1                   fe5ba65c5909c       etcd-functional-382801                                  kube-system
	e6f5835b7a154       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           About a minute ago   Running             kindnet-cni                            1                   ee72329513591       kindnet-z8prk                                           kube-system
	f80c4b7b0eada       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                           About a minute ago   Running             kube-proxy                             1                   52378f42c583c       kube-proxy-mmq2r                                        kube-system
	735f769aeca68       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                           About a minute ago   Exited              kube-controller-manager                1                   06d5b1aa98ffc       kube-controller-manager-functional-382801               kube-system
	e9c726e20bfcb       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                           About a minute ago   Running             coredns                                1                   4f289724cf8eb       coredns-7d764666f9-6bwd9                                kube-system
	c973a46375444       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           About a minute ago   Running             storage-provisioner                    1                   c2505854d5ddc       storage-provisioner                                     kube-system
	1b7f2bdb99059       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                           About a minute ago   Exited              coredns                                0                   4f289724cf8eb       coredns-7d764666f9-6bwd9                                kube-system
	8b2558efec2c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           About a minute ago   Exited              storage-provisioner                    0                   c2505854d5ddc       storage-provisioner                                     kube-system
	ed36f0dc6cb3c       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                         About a minute ago   Exited              kindnet-cni                            0                   ee72329513591       kindnet-z8prk                                           kube-system
	f4103253a2eb8       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                           About a minute ago   Exited              kube-proxy                             0                   52378f42c583c       kube-proxy-mmq2r                                        kube-system
	ff21a817d44a5       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                           2 minutes ago        Exited              etcd                                   0                   fe5ba65c5909c       etcd-functional-382801                                  kube-system
	c3adc8768383e       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                           2 minutes ago        Exited              kube-scheduler                         0                   f5e926474ffce       kube-scheduler-functional-382801                        kube-system
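	The listing above is the CRI-level view that minikube logs collects from inside the node; with the crio runtime used by this profile, roughly the same table can be reproduced by hand (a sketch, assuming crictl inside the node is already pointed at the default CRI-O socket):
	
	  out/minikube-linux-amd64 -p functional-382801 ssh -- sudo crictl ps -a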
	
	
	==> coredns [1b7f2bdb990593e56aaa94730ad8d72569174ac44547e4be15024f62cfb9322e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37311 - 37209 "HINFO IN 3713318318710841845.4451134861727888562. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.479700904s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e9c726e20bfcbe8d0c414378ff2b1260f89857dc3e83965c9d8149e4cdc1e24d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54181 - 39340 "HINFO IN 8674320848017413087.6583931419532591045. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.083497123s
	
	
	==> describe nodes <==
	Name:               functional-382801
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-382801
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-382801
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_33_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:33:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-382801
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:35:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:35:10 +0000   Fri, 19 Dec 2025 02:33:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:35:10 +0000   Fri, 19 Dec 2025 02:33:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:35:10 +0000   Fri, 19 Dec 2025 02:33:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:35:10 +0000   Fri, 19 Dec 2025 02:33:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-382801
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                742e785a-188a-411e-8723-20910f6eb8bd
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-2mrns                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     hello-node-connect-9f67c86d4-6h5j6                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  default                     mysql-7d7b65bc95-hvl88                                   600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     16s
	  default                     nginx-svc                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-6bwd9                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-functional-382801                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-z8prk                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-functional-382801                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-functional-382801                200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-mmq2r                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-functional-382801                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        kubernetes-dashboard-api-54c76d8866-jmvvp                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     13s
	  kubernetes-dashboard        kubernetes-dashboard-auth-84cbccd86c-42tp5               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     13s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-sgkbr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-27766    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     13s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-82nbr                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1850m (23%)  1800m (22%)
	  memory             1532Mi (4%)  2520Mi (7%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node functional-382801 event: Registered Node functional-382801 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node functional-382801 event: Registered Node functional-382801 in Controller
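	The "Allocated resources" totals above are simply column sums from the non-terminated pod table: CPU requests are 600m (mysql) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m each for coredns, etcd, kindnet and kube-scheduler + 4 x 100m for the dashboard api, auth, metrics-scraper and web pods = 1850m, and memory requests are 512Mi + 70Mi + 100Mi + 50Mi + 4 x 200Mi = 1532Mi; the hello-node, nginx, sp-pod, kube-proxy, storage-provisioner and dashboard-kong pods set no requests, so they contribute nothing.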
	
	
	==> dmesg <==
	[  +0.091115] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025741] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.646270] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 02:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.041250] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.024871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.022884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +8.127187] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[ +16.382230] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[Dec19 02:28] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	
	
	==> etcd [bdb00580ea67d5467c0207e1d07156cd09c03a359010430924ad7d3818fcb728] <==
	{"level":"info","ts":"2025-12-19T02:34:26.424329Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-19T02:34:26.424301Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T02:34:26.424543Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T02:34:26.424574Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T02:34:26.424739Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-19T02:34:26.424812Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-19T02:34:26.517727Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T02:34:26.517785Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T02:34:26.517841Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-12-19T02:34:26.517856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T02:34:26.517877Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T02:34:26.521848Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-19T02:34:26.521920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T02:34:26.521939Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-12-19T02:34:26.521948Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-19T02:34:26.523196Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-382801 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T02:34:26.523224Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:34:26.523197Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:34:26.523410Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T02:34:26.523444Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T02:34:26.524432Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:34:26.524515Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:34:26.527597Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:34:26.528665Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-19T02:35:20.826071Z","caller":"traceutil/trace.go:172","msg":"trace[140649711] transaction","detail":"{read_only:false; number_of_response:1; response_revision:928; }","duration":"143.610277ms","start":"2025-12-19T02:35:20.682248Z","end":"2025-12-19T02:35:20.825858Z","steps":["trace[140649711] 'process raft request'  (duration: 57.040399ms)","trace[140649711] 'compare'  (duration: 86.081468ms)"],"step_count":2}
	
	
	==> etcd [ff21a817d44a550bbd06b9862f5095bc6790cdfb7967c06b1696e995412994cd] <==
	{"level":"info","ts":"2025-12-19T02:33:32.025824Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T02:33:32.025863Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-19T02:33:32.025976Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-19T02:33:32.026049Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:33:32.026058Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:33:32.029799Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-12-19T02:33:32.029805Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:34:18.612116Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:34:18.612190Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-382801","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-12-19T02:34:18.612307Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:34:25.614024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:34:25.615389Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:34:25.615421Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-12-19T02:34:25.615497Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-19T02:34:25.615541Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:34:25.615540Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:34:25.615562Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:34:25.615572Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-19T02:34:25.615614Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:34:25.615633Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:34:25.615642Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:34:25.617412Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-12-19T02:34:25.617466Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:34:25.617502Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-12-19T02:34:25.617537Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-382801","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:35:32 up 18 min,  0 user,  load average: 2.94, 1.80, 0.94
	Linux functional-382801 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e6f5835b7a154209d43214286c3686a15d52af0557f8153c0e4b018ebe3083e5] <==
	I1219 02:34:19.568442       1 main.go:148] setting mtu 1500 for CNI 
	I1219 02:34:19.568465       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 02:34:19.568491       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T02:34:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 02:34:19.839941       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:34:19.840002       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:34:19.840012       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:34:19.840397       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:34:20.240152       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:34:20.240180       1 metrics.go:72] Registering metrics
	I1219 02:34:20.240262       1 controller.go:711] "Syncing nftables rules"
	I1219 02:34:29.840444       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:34:29.840506       1 main.go:301] handling current node
	I1219 02:34:39.840855       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:34:39.840892       1 main.go:301] handling current node
	I1219 02:34:49.844287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:34:49.844328       1 main.go:301] handling current node
	I1219 02:34:59.842777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:34:59.842813       1 main.go:301] handling current node
	I1219 02:35:09.840788       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:35:09.840822       1 main.go:301] handling current node
	I1219 02:35:19.840854       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:35:19.840913       1 main.go:301] handling current node
	I1219 02:35:29.840297       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:35:29.840332       1 main.go:301] handling current node
	
	
	==> kindnet [ed36f0dc6cb3cbd7251e55f612c63a863ee54013167abae29302c58543992a3f] <==
	I1219 02:33:42.677241       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 02:33:42.677503       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1219 02:33:42.677622       1 main.go:148] setting mtu 1500 for CNI 
	I1219 02:33:42.677638       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 02:33:42.677659       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T02:33:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 02:33:42.878204       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:33:42.878250       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:33:42.878262       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:33:42.878923       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:33:43.231235       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:33:43.231274       1 metrics.go:72] Registering metrics
	I1219 02:33:43.231341       1 controller.go:711] "Syncing nftables rules"
	I1219 02:33:52.878294       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:33:52.878374       1 main.go:301] handling current node
	I1219 02:34:02.879460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:34:02.879503       1 main.go:301] handling current node
	I1219 02:34:12.879801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1219 02:34:12.879852       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a7b0071538e937f37e8667060b008f6fcbf896864398efa5dc9168597ec5c17b] <==
	I1219 02:34:57.597780       1 alloc.go:329] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.206.47"}
	I1219 02:35:02.340270       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.125.4"}
	I1219 02:35:04.545902       1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.66.168"}
	I1219 02:35:07.965881       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.231.170"}
	I1219 02:35:15.478525       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.1.116"}
	I1219 02:35:15.679343       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:35:15.696431       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:35:15.708022       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:35:15.714148       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:35:15.721656       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:35:15.729580       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:35:15.739208       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:35:15.745331       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:35:15.752947       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:35:15.758927       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:35:15.766126       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:35:15.780245       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:35:18.298151       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 02:35:18.570443       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.102.219.216"}
	I1219 02:35:18.583205       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.105.47.20"}
	I1219 02:35:18.585416       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.173.109"}
	I1219 02:35:18.602269       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.17.101"}
	I1219 02:35:18.602971       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.101.18.142"}
	E1219 02:35:30.118402       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44288: use of closed network connection
	E1219 02:35:31.634509       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44336: use of closed network connection
	
	
	==> kube-controller-manager [735f769aeca68df951a98d7e16cd245e2f1adad75f37c4cd4bcb8b0910506ac0] <==
	I1219 02:34:19.672471       1 serving.go:386] Generated self-signed cert in-memory
	I1219 02:34:19.680033       1 controllermanager.go:189] "Starting" version="v1.35.0-rc.1"
	I1219 02:34:19.680056       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:34:19.681377       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1219 02:34:19.681385       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1219 02:34:19.681570       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1219 02:34:19.681654       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 02:34:31.690110       1 controllermanager.go:250] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [ab9ed915c3bf476581929d44e0d224198bc91180547c41261b08775c18f003f1] <==
	I1219 02:34:42.226622       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.227541       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.229382       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.229486       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.229681       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.229718       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.229756       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.230364       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:34:42.230555       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.230796       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.230886       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.230910       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.230933       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.230951       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.230979       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.231038       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.231097       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.231166       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.231215       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.231244       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.231501       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.327562       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:42.327597       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 02:34:42.327604       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 02:34:42.330454       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [f4103253a2eb86b061574c896e28915ed410f160d906778df357ad64f11f7d78] <==
	I1219 02:33:41.312668       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:33:41.385263       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:33:41.486978       1 shared_informer.go:377] "Caches are synced"
	I1219 02:33:41.487135       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:33:41.487342       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:33:41.512129       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:33:41.512196       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:33:41.518141       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:33:41.518503       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:33:41.518525       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:33:41.520371       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:33:41.520394       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:33:41.520423       1 config.go:200] "Starting service config controller"
	I1219 02:33:41.520431       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:33:41.520932       1 config.go:309] "Starting node config controller"
	I1219 02:33:41.520955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:33:41.520965       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:33:41.522925       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:33:41.522970       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:33:41.620633       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:33:41.620888       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:33:41.623431       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [f80c4b7b0eada8e4c82f02f1a6e434bbae2879ee19d21c502a47e381f2af4013] <==
	I1219 02:34:19.404276       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:34:19.474600       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:34:27.274988       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:27.275027       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1219 02:34:27.275136       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:34:27.294583       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:34:27.294644       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:34:27.300046       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:34:27.300447       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:34:27.300480       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:34:27.301933       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:34:27.301968       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:34:27.301971       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:34:27.301979       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:34:27.302019       1 config.go:200] "Starting service config controller"
	I1219 02:34:27.302029       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:34:27.302044       1 config.go:309] "Starting node config controller"
	I1219 02:34:27.302054       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:34:27.302061       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:34:27.503104       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:34:27.902568       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:34:28.002549       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c3adc8768383e1d30ec24af1ac532d6e279bbffd5ada8872e3dd9e95c75b4f11] <==
	E1219 02:33:32.910888       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 02:33:32.910908       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 02:33:32.910946       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 02:33:32.910943       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 02:33:32.910984       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 02:33:32.911059       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 02:33:32.911060       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 02:33:32.911092       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 02:33:32.911407       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 02:33:33.756604       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 02:33:33.762477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 02:33:33.768069       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 02:33:33.916334       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 02:33:33.947936       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 02:33:33.980042       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 02:33:33.996153       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 02:33:34.020598       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 02:33:34.082719       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	I1219 02:33:34.406472       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:25.722027       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:34:25.722621       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:34:25.722029       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:34:25.723033       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:34:25.723060       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:34:25.723084       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e4d91d98a5ce265e5d6c2e4c175c8ac423f1bb6b7b8d9129e9749bfe787aabb1] <==
	I1219 02:34:26.743535       1 serving.go:386] Generated self-signed cert in-memory
	I1219 02:34:27.880977       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 02:34:27.881007       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1219 02:34:27.883034       1 secure_serving.go:111] Initial population of client CA failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.49.2:8441: connect: connection refused
	I1219 02:34:27.883180       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 02:34:27.883201       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:34:27.883225       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:34:27.883187       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:34:27.883318       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:34:27.883228       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:34:27.883338       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:34:27.883357       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 02:34:38.997792       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	I1219 02:34:43.283662       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:45.183858       1 shared_informer.go:377] "Caches are synced"
	I1219 02:34:46.483759       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 02:35:19 functional-382801 kubelet[4438]: I1219 02:35:19.770842    4438 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac141591-1a49-46d6-b6c4-8aa347ad9154-pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3" pod "ac141591-1a49-46d6-b6c4-8aa347ad9154" (UID: "ac141591-1a49-46d6-b6c4-8aa347ad9154"). InnerVolumeSpecName "pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 19 02:35:19 functional-382801 kubelet[4438]: I1219 02:35:19.772283    4438 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac141591-1a49-46d6-b6c4-8aa347ad9154-kube-api-access-v2gdt" pod "ac141591-1a49-46d6-b6c4-8aa347ad9154" (UID: "ac141591-1a49-46d6-b6c4-8aa347ad9154"). InnerVolumeSpecName "kube-api-access-v2gdt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 19 02:35:19 functional-382801 kubelet[4438]: E1219 02:35:19.862713    4438 configmap.go:193] Couldn't get configMap kubernetes-dashboard/kong-dbless-config: failed to sync configmap cache: timed out waiting for the condition
	Dec 19 02:35:19 functional-382801 kubelet[4438]: E1219 02:35:19.862874    4438 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bbf4f08d-69f5-438d-a11c-47b69525b304-kong-custom-dbless-config-volume podName:bbf4f08d-69f5-438d-a11c-47b69525b304 nodeName:}" failed. No retries permitted until 2025-12-19 02:35:20.362840757 +0000 UTC m=+43.045510367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kong-custom-dbless-config-volume" (UniqueName: "kubernetes.io/configmap/bbf4f08d-69f5-438d-a11c-47b69525b304-kong-custom-dbless-config-volume") pod "kubernetes-dashboard-kong-78b7499b45-sgkbr" (UID: "bbf4f08d-69f5-438d-a11c-47b69525b304") : failed to sync configmap cache: timed out waiting for the condition
	Dec 19 02:35:19 functional-382801 kubelet[4438]: I1219 02:35:19.870190    4438 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v2gdt\" (UniqueName: \"kubernetes.io/projected/ac141591-1a49-46d6-b6c4-8aa347ad9154-kube-api-access-v2gdt\") on node \"functional-382801\" DevicePath \"\""
	Dec 19 02:35:19 functional-382801 kubelet[4438]: I1219 02:35:19.870339    4438 reconciler_common.go:299] "Volume detached for volume \"pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3\" (UniqueName: \"kubernetes.io/host-path/ac141591-1a49-46d6-b6c4-8aa347ad9154-pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3\") on node \"functional-382801\" DevicePath \"\""
	Dec 19 02:35:20 functional-382801 kubelet[4438]: I1219 02:35:20.575210    4438 scope.go:122] "RemoveContainer" containerID="0a08044bc1735b9d4c1b39e7f76a3daebf9784d73749167865b622b61cddfda3"
	Dec 19 02:35:21 functional-382801 kubelet[4438]: I1219 02:35:21.080333    4438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3\" (UniqueName: \"kubernetes.io/host-path/c0e503e8-b519-4063-adfa-1046116d0dee-pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3\") pod \"sp-pod\" (UID: \"c0e503e8-b519-4063-adfa-1046116d0dee\") " pod="default/sp-pod"
	Dec 19 02:35:21 functional-382801 kubelet[4438]: I1219 02:35:21.080626    4438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzwhz\" (UniqueName: \"kubernetes.io/projected/c0e503e8-b519-4063-adfa-1046116d0dee-kube-api-access-fzwhz\") pod \"sp-pod\" (UID: \"c0e503e8-b519-4063-adfa-1046116d0dee\") " pod="default/sp-pod"
	Dec 19 02:35:21 functional-382801 kubelet[4438]: I1219 02:35:21.411174    4438 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ac141591-1a49-46d6-b6c4-8aa347ad9154" path="/var/lib/kubelet/pods/ac141591-1a49-46d6-b6c4-8aa347ad9154/volumes"
	Dec 19 02:35:24 functional-382801 kubelet[4438]: I1219 02:35:24.583790    4438 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:35:24 functional-382801 kubelet[4438]: I1219 02:35:24.583890    4438 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:35:24 functional-382801 kubelet[4438]: I1219 02:35:24.609350    4438 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/mysql-7d7b65bc95-hvl88" podStartSLOduration=1.852010619 podStartE2EDuration="9.60932765s" podCreationTimestamp="2025-12-19 02:35:15 +0000 UTC" firstStartedPulling="2025-12-19 02:35:15.919881343 +0000 UTC m=+38.602550951" lastFinishedPulling="2025-12-19 02:35:23.677198376 +0000 UTC m=+46.359867982" observedRunningTime="2025-12-19 02:35:24.608852778 +0000 UTC m=+47.291522390" watchObservedRunningTime="2025-12-19 02:35:24.60932765 +0000 UTC m=+47.291997262"
	Dec 19 02:35:25 functional-382801 kubelet[4438]: I1219 02:35:25.620523    4438 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-84cbccd86c-42tp5" podStartSLOduration=2.725235819 podStartE2EDuration="7.620501625s" podCreationTimestamp="2025-12-19 02:35:18 +0000 UTC" firstStartedPulling="2025-12-19 02:35:19.687930009 +0000 UTC m=+42.370599603" lastFinishedPulling="2025-12-19 02:35:24.583195742 +0000 UTC m=+47.265865409" observedRunningTime="2025-12-19 02:35:25.620418835 +0000 UTC m=+48.303088520" watchObservedRunningTime="2025-12-19 02:35:25.620501625 +0000 UTC m=+48.303171236"
	Dec 19 02:35:25 functional-382801 kubelet[4438]: I1219 02:35:25.622211    4438 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=5.622161843 podStartE2EDuration="5.622161843s" podCreationTimestamp="2025-12-19 02:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:35:24.623071912 +0000 UTC m=+47.305741524" watchObservedRunningTime="2025-12-19 02:35:25.622161843 +0000 UTC m=+48.304831454"
	Dec 19 02:35:25 functional-382801 kubelet[4438]: I1219 02:35:25.954658    4438 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:35:25 functional-382801 kubelet[4438]: I1219 02:35:25.954765    4438 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:35:26 functional-382801 kubelet[4438]: I1219 02:35:26.631480    4438 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-54c76d8866-jmvvp" podStartSLOduration=2.3667002520000002 podStartE2EDuration="8.631460718s" podCreationTimestamp="2025-12-19 02:35:18 +0000 UTC" firstStartedPulling="2025-12-19 02:35:19.689259982 +0000 UTC m=+42.371929575" lastFinishedPulling="2025-12-19 02:35:25.95402043 +0000 UTC m=+48.636690041" observedRunningTime="2025-12-19 02:35:26.631160236 +0000 UTC m=+49.313829870" watchObservedRunningTime="2025-12-19 02:35:26.631460718 +0000 UTC m=+49.314130329"
	Dec 19 02:35:26 functional-382801 kubelet[4438]: I1219 02:35:26.807267    4438 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:35:26 functional-382801 kubelet[4438]: I1219 02:35:26.807357    4438 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:35:27 functional-382801 kubelet[4438]: E1219 02:35:27.616336    4438 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-27766" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 02:35:27 functional-382801 kubelet[4438]: I1219 02:35:27.630011    4438 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-27766" podStartSLOduration=2.512784582 podStartE2EDuration="9.629988451s" podCreationTimestamp="2025-12-19 02:35:18 +0000 UTC" firstStartedPulling="2025-12-19 02:35:19.689417518 +0000 UTC m=+42.372087110" lastFinishedPulling="2025-12-19 02:35:26.806621363 +0000 UTC m=+49.489290979" observedRunningTime="2025-12-19 02:35:27.629482629 +0000 UTC m=+50.312152240" watchObservedRunningTime="2025-12-19 02:35:27.629988451 +0000 UTC m=+50.312658063"
	Dec 19 02:35:28 functional-382801 kubelet[4438]: E1219 02:35:28.619088    4438 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-27766" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 02:35:29 functional-382801 kubelet[4438]: I1219 02:35:29.079175    4438 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 02:35:29 functional-382801 kubelet[4438]: I1219 02:35:29.079266    4438 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	
	
	==> kubernetes-dashboard [4f579c8d723f9da193120d39c90ff4ae4dcbca4e3cec578a2a80a21522355adb] <==
	I1219 02:35:24.650397       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 02:35:24.650479       1 init.go:49] Using in-cluster config
	I1219 02:35:24.650621       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [a775927e45d13a16a427c57d2c89044040b7e3247594a346a154ddc3c6c5824e] <==
	I1219 02:35:26.865958       1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
	W1219 02:35:26.866023       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1219 02:35:26.866138       1 main.go:51] Kubernetes host: https://10.96.0.1:443
	I1219 02:35:26.866142       1 main.go:52] Namespace(s): []
	
	
	==> kubernetes-dashboard [c40e010920f4b23ccbd3f664e3f377712b23f65ac27da2e92da675f74bb88cbc] <==
	I1219 02:35:26.084564       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 02:35:26.084631       1 init.go:49] Using in-cluster config
	I1219 02:35:26.084887       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 02:35:26.084904       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 02:35:26.084910       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 02:35:26.084915       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 02:35:26.090242       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 02:35:26.090275       1 client.go:265] Creating in-cluster Sidecar client
	I1219 02:35:26.093796       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 02:35:26.097507       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> kubernetes-dashboard [f19e80056691edb928bf0bd009a29389e7e6502af656fd955e2fefddc4c345e5] <==
	I1219 02:35:29.217335       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 02:35:29.217586       1 init.go:48] Using in-cluster config
	I1219 02:35:29.217898       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [8b2558efec2c454f69b3ae354f1879e4995d90fa163a324706872e340721e643] <==
	I1219 02:33:53.813973       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-382801_6a28e90e-a4dd-42b1-af4a-19fba1c94724!
	W1219 02:33:55.724814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:55.729663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:57.732678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:57.736572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:59.740247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:33:59.744195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:01.747052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:01.751740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:03.754804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:03.759683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:05.763099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:05.769228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:07.772112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:07.777053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:09.780658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:09.784600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:11.787852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:11.791553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:13.794674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:13.800272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:15.803565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:15.808535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:17.811525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:34:17.815372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c973a46375444ba08f53da22d06660463f717b9b772b3adb6cd9ecbf2005ed10] <==
	I1219 02:35:10.214994       1 volume_store.go:212] Trying to save persistentvolume "pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3"
	I1219 02:35:10.222334       1 volume_store.go:219] persistentvolume "pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3" saved
	I1219 02:35:10.222408       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"be10f5b8-5c59-4041-bc9f-910287a6d3a3", APIVersion:"v1", ResourceVersion:"720", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-be10f5b8-5c59-4041-bc9f-910287a6d3a3
	W1219 02:35:11.229396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:11.233233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:13.236878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:13.241353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:15.245243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:15.252417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:17.256578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:17.260881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:19.265639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:19.270342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:21.274419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:21.282857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:23.286629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:23.291324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:25.294237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:25.335947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:27.341696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:27.348516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:29.351743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:29.355926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:31.360173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:35:31.368597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-382801 -n functional-382801
helpers_test.go:270: (dbg) Run:  kubectl --context functional-382801 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount kubernetes-dashboard-kong-78b7499b45-sgkbr
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-382801 describe pod busybox-mount kubernetes-dashboard-kong-78b7499b45-sgkbr
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-382801 describe pod busybox-mount kubernetes-dashboard-kong-78b7499b45-sgkbr: exit status 1 (71.138811ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-382801/192.168.49.2
	Start Time:       Fri, 19 Dec 2025 02:35:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7e7f8bf4d56f5f451bfb0369be2e4bbc503c709a73e630e9fd2f7ed5ee0b8a92
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:35:15 +0000
	      Finished:     Fri, 19 Dec 2025 02:35:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wkrq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8wkrq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  19s   default-scheduler  Successfully assigned default/busybox-mount to functional-382801
	  Normal  Pulling    19s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     18s   kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.283s (1.283s including waiting). Image size: 4631262 bytes.
	  Normal  Created    18s   kubelet            spec.containers{mount-munger}: Container created
	  Normal  Started    18s   kubelet            spec.containers{mount-munger}: Container started

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-kong-78b7499b45-sgkbr" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-382801 describe pod busybox-mount kubernetes-dashboard-kong-78b7499b45-sgkbr: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (19.87s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.05s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-749966 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-749966 --output=json --user=testUser: exit status 80 (2.0542097s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e275e5dc-7330-43c5-bf94-072c9b9e2fb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-749966 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"966bcd2c-5aa6-4e6f-a107-ccfb3de4edea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-19T02:43:59Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"82daa0ed-6633-4310-8bb6-3cf0d7cb863e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-749966 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.05s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.47s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-749966 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-749966 --output=json --user=testUser: exit status 80 (1.464932315s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e65c76e8-b0d5-408e-9ab6-7e697d285d69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-749966 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"236f4c9f-3ba7-45a7-b343-4fa9dddadff5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-19T02:44:00Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"378e3ed7-c2aa-4279-b244-43e292a996df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-749966 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.47s)

                                                
                                    
x
+
TestPause/serial/Pause (6.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-211152 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-211152 --alsologtostderr -v=5: exit status 80 (1.843225108s)

                                                
                                                
-- stdout --
	* Pausing node pause-211152 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:57:17.057849  213834 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:57:17.058192  213834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:57:17.058205  213834 out.go:374] Setting ErrFile to fd 2...
	I1219 02:57:17.058212  213834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:57:17.058526  213834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:57:17.058859  213834 out.go:368] Setting JSON to false
	I1219 02:57:17.058884  213834 mustload.go:66] Loading cluster: pause-211152
	I1219 02:57:17.059449  213834 config.go:182] Loaded profile config "pause-211152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:17.060115  213834 cli_runner.go:164] Run: docker container inspect pause-211152 --format={{.State.Status}}
	I1219 02:57:17.079305  213834 host.go:66] Checking if "pause-211152" exists ...
	I1219 02:57:17.079628  213834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:57:17.141695  213834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-19 02:57:17.129791591 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:57:17.142648  213834 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-211152 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1219 02:57:17.147108  213834 out.go:179] * Pausing node pause-211152 ... 
	I1219 02:57:17.148261  213834 host.go:66] Checking if "pause-211152" exists ...
	I1219 02:57:17.148501  213834 ssh_runner.go:195] Run: systemctl --version
	I1219 02:57:17.148545  213834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-211152
	I1219 02:57:17.170810  213834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/pause-211152/id_rsa Username:docker}
	I1219 02:57:17.286561  213834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:57:17.299876  213834 pause.go:52] kubelet running: true
	I1219 02:57:17.299973  213834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 02:57:17.449998  213834 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 02:57:17.450116  213834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 02:57:17.524253  213834 cri.go:92] found id: "69d306c1c1f12d041d3d325fe991687c4c32f93d8ce29f396cd6917e07519490"
	I1219 02:57:17.524275  213834 cri.go:92] found id: "a9ef06e0549fe0c824c55a1142a63e1b3abf5f40ab71885c2173c72adba1b207"
	I1219 02:57:17.524279  213834 cri.go:92] found id: "02047f414f2d42bf767be3e09438a965f51890f6f3b2813b11142351f7d514cd"
	I1219 02:57:17.524282  213834 cri.go:92] found id: "673af32a168e774c07c1b59798d285d1b807b06b26669fb132366d72860d131b"
	I1219 02:57:17.524285  213834 cri.go:92] found id: "d78476b58e5cdd9f164b0f0c59c9f3c1c62003d2686a83dcfabc253d17de5158"
	I1219 02:57:17.524293  213834 cri.go:92] found id: "3715e7afbb70c49e2c2f6cda05153ddd43e82667e9d2cd236d4bda8c7d4ee889"
	I1219 02:57:17.524296  213834 cri.go:92] found id: "21c83683c5e2fcc64c9605ffbc692af28bd5647eb5834a78753b0e5f1adb1f1e"
	I1219 02:57:17.524298  213834 cri.go:92] found id: ""
	I1219 02:57:17.524335  213834 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:57:17.537655  213834 retry.go:31] will retry after 322.490752ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:57:17Z" level=error msg="open /run/runc: no such file or directory"
	I1219 02:57:17.861178  213834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:57:17.875413  213834 pause.go:52] kubelet running: false
	I1219 02:57:17.875474  213834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 02:57:18.013025  213834 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 02:57:18.013102  213834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 02:57:18.100342  213834 cri.go:92] found id: "69d306c1c1f12d041d3d325fe991687c4c32f93d8ce29f396cd6917e07519490"
	I1219 02:57:18.100361  213834 cri.go:92] found id: "a9ef06e0549fe0c824c55a1142a63e1b3abf5f40ab71885c2173c72adba1b207"
	I1219 02:57:18.100367  213834 cri.go:92] found id: "02047f414f2d42bf767be3e09438a965f51890f6f3b2813b11142351f7d514cd"
	I1219 02:57:18.100372  213834 cri.go:92] found id: "673af32a168e774c07c1b59798d285d1b807b06b26669fb132366d72860d131b"
	I1219 02:57:18.100376  213834 cri.go:92] found id: "d78476b58e5cdd9f164b0f0c59c9f3c1c62003d2686a83dcfabc253d17de5158"
	I1219 02:57:18.100381  213834 cri.go:92] found id: "3715e7afbb70c49e2c2f6cda05153ddd43e82667e9d2cd236d4bda8c7d4ee889"
	I1219 02:57:18.100385  213834 cri.go:92] found id: "21c83683c5e2fcc64c9605ffbc692af28bd5647eb5834a78753b0e5f1adb1f1e"
	I1219 02:57:18.100390  213834 cri.go:92] found id: ""
	I1219 02:57:18.100449  213834 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:57:18.115301  213834 retry.go:31] will retry after 456.992104ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:57:18Z" level=error msg="open /run/runc: no such file or directory"
	I1219 02:57:18.573003  213834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:57:18.587811  213834 pause.go:52] kubelet running: false
	I1219 02:57:18.587876  213834 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 02:57:18.731288  213834 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 02:57:18.731400  213834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 02:57:18.809130  213834 cri.go:92] found id: "69d306c1c1f12d041d3d325fe991687c4c32f93d8ce29f396cd6917e07519490"
	I1219 02:57:18.809157  213834 cri.go:92] found id: "a9ef06e0549fe0c824c55a1142a63e1b3abf5f40ab71885c2173c72adba1b207"
	I1219 02:57:18.809162  213834 cri.go:92] found id: "02047f414f2d42bf767be3e09438a965f51890f6f3b2813b11142351f7d514cd"
	I1219 02:57:18.809168  213834 cri.go:92] found id: "673af32a168e774c07c1b59798d285d1b807b06b26669fb132366d72860d131b"
	I1219 02:57:18.809172  213834 cri.go:92] found id: "d78476b58e5cdd9f164b0f0c59c9f3c1c62003d2686a83dcfabc253d17de5158"
	I1219 02:57:18.809177  213834 cri.go:92] found id: "3715e7afbb70c49e2c2f6cda05153ddd43e82667e9d2cd236d4bda8c7d4ee889"
	I1219 02:57:18.809181  213834 cri.go:92] found id: "21c83683c5e2fcc64c9605ffbc692af28bd5647eb5834a78753b0e5f1adb1f1e"
	I1219 02:57:18.809185  213834 cri.go:92] found id: ""
	I1219 02:57:18.809230  213834 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 02:57:18.824264  213834 out.go:203] 
	W1219 02:57:18.825484  213834 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:57:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:57:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 02:57:18.825506  213834 out.go:285] * 
	* 
	W1219 02:57:18.831950  213834 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 02:57:18.833302  213834 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-211152 --alsologtostderr -v=5" : exit status 80
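Editor's note (not captured test output): the pause attempt fails the same way on `sudo runc list -f json` three times (the initial run plus two retries) before minikube gives up. The two node-side commands below are copied from the ssh_runner lines above and can be re-run by hand to reproduce the failure, assuming pause-211152 is still up:

    # Step 1: list candidate containers by namespace label (this step succeeds in the log above).
    $ out/minikube-linux-amd64 ssh -p pause-211152 -- "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
    # Step 2: the call that fails; with crun as CRI-O's default runtime, /run/runc is never created.
    $ out/minikube-linux-amd64 ssh -p pause-211152 -- "sudo runc list -f json"
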
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-211152
helpers_test.go:244: (dbg) docker inspect pause-211152:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef",
	        "Created": "2025-12-19T02:56:34.788387799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T02:56:34.840032652Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef/hosts",
	        "LogPath": "/var/lib/docker/containers/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef-json.log",
	        "Name": "/pause-211152",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-211152:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-211152",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef",
	                "LowerDir": "/var/lib/docker/overlay2/1419e581290a987b7860fe2e44d5319b0d68b10f76f318b976831ffccf008092-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1419e581290a987b7860fe2e44d5319b0d68b10f76f318b976831ffccf008092/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1419e581290a987b7860fe2e44d5319b0d68b10f76f318b976831ffccf008092/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1419e581290a987b7860fe2e44d5319b0d68b10f76f318b976831ffccf008092/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-211152",
	                "Source": "/var/lib/docker/volumes/pause-211152/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-211152",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-211152",
	                "name.minikube.sigs.k8s.io": "pause-211152",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4ad80853a92c74a8fb7d8d72cbe3080c51cf39c02a350bab6b71d126f8f4d51a",
	            "SandboxKey": "/var/run/docker/netns/4ad80853a92c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-211152": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1f39185e935b9b2e42855cc34007e854ee0a319f8e2789f551113e1723c7022",
	                    "EndpointID": "edb0382fadade3922a1d198b00ded6dd717ba2a3c87cbfa80e6292c929440e0f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:1a:72:ef:89:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-211152",
	                        "b8bc9a004d2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-211152 -n pause-211152
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-211152 -n pause-211152: exit status 2 (415.681988ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
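Editor's note (not captured test output): whatever the exit status 2 encodes, the failed pause attempt above had already run `sudo systemctl disable --now kubelet` (later passes log `kubelet running: false`), so the node container is left Running with kubelet stopped. A hedged way to confirm the half-paused state, assuming the profile is still up:

    # kubelet should report inactive after the failed pause attempt.
    $ out/minikube-linux-amd64 ssh -p pause-211152 -- 'sudo systemctl is-active kubelet'
    # Full status (host, kubelet, apiserver) rather than only the {{.Host}} field queried above.
    $ out/minikube-linux-amd64 status -p pause-211152
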
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-211152 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-211152 logs -n 25: (1.142114454s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-759961 --memory=3072 --driver=docker  --container-runtime=crio                                            │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ stop    │ -p scheduled-stop-759961 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --cancel-scheduled                                                                                 │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ delete  │ -p scheduled-stop-759961                                                                                                    │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p insufficient-storage-486590 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-486590 │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │                     │
	│ delete  │ -p insufficient-storage-486590                                                                                              │ insufficient-storage-486590 │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p offline-crio-172724 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-172724         │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:57 UTC │
	│ start   │ -p force-systemd-env-215639 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-215639    │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p cert-expiration-254196 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-254196      │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p pause-211152 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-211152                │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:57 UTC │
	│ delete  │ -p force-systemd-env-215639                                                                                                 │ force-systemd-env-215639    │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p force-systemd-flag-675485 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-675485   │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │                     │
	│ start   │ -p pause-211152 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-211152                │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │ 19 Dec 25 02:57 UTC │
	│ delete  │ -p offline-crio-172724                                                                                                      │ offline-crio-172724         │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │ 19 Dec 25 02:57 UTC │
	│ pause   │ -p pause-211152 --alsologtostderr -v=5                                                                                      │ pause-211152                │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │                     │
	│ start   │ -p NoKubernetes-148997 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-148997         │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │                     │
	│ start   │ -p NoKubernetes-148997 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-148997         │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:57:17
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:57:17.895205  214168 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:57:17.895340  214168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:57:17.895350  214168 out.go:374] Setting ErrFile to fd 2...
	I1219 02:57:17.895356  214168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:57:17.895562  214168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:57:17.896033  214168 out.go:368] Setting JSON to false
	I1219 02:57:17.897083  214168 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2389,"bootTime":1766110649,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:57:17.897134  214168 start.go:143] virtualization: kvm guest
	I1219 02:57:17.898913  214168 out.go:179] * [NoKubernetes-148997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:57:17.900411  214168 notify.go:221] Checking for updates...
	I1219 02:57:17.900422  214168 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:57:17.901909  214168 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:57:17.903196  214168 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:57:17.904453  214168 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:57:17.905749  214168 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:57:17.906933  214168 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:57:17.912457  214168 config.go:182] Loaded profile config "cert-expiration-254196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:17.912584  214168 config.go:182] Loaded profile config "force-systemd-flag-675485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:17.912773  214168 config.go:182] Loaded profile config "pause-211152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:17.912896  214168 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:57:17.940686  214168 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:57:17.940802  214168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:57:18.002877  214168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 02:57:17.99015725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:57:18.002965  214168 docker.go:319] overlay module found
	I1219 02:57:18.007807  214168 out.go:179] * Using the docker driver based on user configuration
	I1219 02:57:18.009379  214168 start.go:309] selected driver: docker
	I1219 02:57:18.009399  214168 start.go:928] validating driver "docker" against <nil>
	I1219 02:57:18.009414  214168 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:57:18.010231  214168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:57:18.090739  214168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 02:57:18.074001733 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:57:18.091099  214168 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:57:18.091451  214168 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:57:18.096206  214168 out.go:179] * Using Docker driver with root privileges
	I1219 02:57:18.097235  214168 cni.go:84] Creating CNI manager for ""
	I1219 02:57:18.097320  214168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 02:57:18.097330  214168 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 02:57:18.097424  214168 start.go:353] cluster config:
	{Name:NoKubernetes-148997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:NoKubernetes-148997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:57:18.098588  214168 out.go:179] * Starting "NoKubernetes-148997" primary control-plane node in "NoKubernetes-148997" cluster
	I1219 02:57:18.099863  214168 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 02:57:18.101155  214168 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 02:57:18.102486  214168 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:57:18.102529  214168 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 02:57:18.102543  214168 cache.go:65] Caching tarball of preloaded images
	I1219 02:57:18.102570  214168 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 02:57:18.102628  214168 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 02:57:18.102638  214168 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 02:57:18.102769  214168 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/NoKubernetes-148997/config.json ...
	I1219 02:57:18.102790  214168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/NoKubernetes-148997/config.json: {Name:mkea7dabba1a7f2313980dcc6cd1805ab09f7131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:57:18.131193  214168 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 02:57:18.131222  214168 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 02:57:18.131244  214168 cache.go:243] Successfully downloaded all kic artifacts
	I1219 02:57:18.131279  214168 start.go:360] acquireMachinesLock for NoKubernetes-148997: {Name:mk9e8bfe93b8039c01cc85cbc24ce878009bedb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:57:18.131385  214168 start.go:364] duration metric: took 82.015µs to acquireMachinesLock for "NoKubernetes-148997"
	I1219 02:57:18.131417  214168 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-148997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:NoKubernetes-148997 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 02:57:18.131502  214168 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.874517671Z" level=info msg="RDT not available in the host system"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.874528078Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.875411578Z" level=info msg="Conmon does support the --sync option"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.875427443Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.875440473Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.876354095Z" level=info msg="Conmon does support the --sync option"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.876369472Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.880233894Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.880252259Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.880768317Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.881154894Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.881214416Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.962805415Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-jnrjd Namespace:kube-system ID:cff0f6cb7779a651b60e5a9271e40d4cb2998b6f0919173b8d82e1c8c58e1421 UID:d24d1284-c2a0-408b-a164-1e568a18f0d1 NetNS:/var/run/netns/1d311123-c62a-4d7e-adb0-20a79d09eafe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128018}] Aliases:map[]}"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963069584Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-jnrjd for CNI network kindnet (type=ptp)"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963589625Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.96361846Z" level=info msg="Starting seccomp notifier watcher"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963669082Z" level=info msg="Create NRI interface"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963791869Z" level=info msg="built-in NRI default validator is disabled"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963808717Z" level=info msg="runtime interface created"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963822156Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963830239Z" level=info msg="runtime interface starting up..."
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963837392Z" level=info msg="starting plugins..."
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963852043Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.964117708Z" level=info msg="No systemd watchdog enabled"
	Dec 19 02:57:13 pause-211152 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	69d306c1c1f12       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     11 seconds ago      Running             coredns                   0                   cff0f6cb7779a       coredns-66bc5c9577-jnrjd               kube-system
	a9ef06e0549fe       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   22 seconds ago      Running             kindnet-cni               0                   50231b78c65eb       kindnet-hrl64                          kube-system
	02047f414f2d4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     24 seconds ago      Running             kube-proxy                0                   3abdc60a64415       kube-proxy-gq4jq                       kube-system
	673af32a168e7       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     34 seconds ago      Running             kube-scheduler            0                   57e25cfd7d3b6       kube-scheduler-pause-211152            kube-system
	d78476b58e5cd       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     34 seconds ago      Running             kube-apiserver            0                   a988cfb3ee133       kube-apiserver-pause-211152            kube-system
	3715e7afbb70c       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     34 seconds ago      Running             kube-controller-manager   0                   4bf6d33e9749c       kube-controller-manager-pause-211152   kube-system
	21c83683c5e2f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     34 seconds ago      Running             etcd                      0                   78e2ee5185673       etcd-pause-211152                      kube-system
	
	
	==> coredns [69d306c1c1f12d041d3d325fe991687c4c32f93d8ce29f396cd6917e07519490] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60444 - 34359 "HINFO IN 1979885502063111897.8945076085577984792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038642445s
	
	
	==> describe nodes <==
	Name:               pause-211152
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-211152
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=pause-211152
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-211152
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:57:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:57:07 +0000   Fri, 19 Dec 2025 02:56:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:57:07 +0000   Fri, 19 Dec 2025 02:56:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:57:07 +0000   Fri, 19 Dec 2025 02:56:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:57:07 +0000   Fri, 19 Dec 2025 02:57:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-211152
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                8f68b359-8880-4916-a92f-ded77b347c4a
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jnrjd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-211152                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-hrl64                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-211152             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-211152    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-gq4jq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-211152             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-211152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-211152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-211152 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-211152 event: Registered Node pause-211152 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-211152 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091115] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025741] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.646270] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 02:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.041250] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.024871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.022884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +8.127187] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[ +16.382230] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[Dec19 02:28] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	
	
	==> etcd [21c83683c5e2fcc64c9605ffbc692af28bd5647eb5834a78753b0e5f1adb1f1e] <==
	{"level":"warn","ts":"2025-12-19T02:56:46.274883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.282315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.289331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.296247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.311395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.324353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.330926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.338268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.345349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.352509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.359415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.365993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.373879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.381216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.399857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.410220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.428295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.436433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.447370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.507758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:57:04.433841Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.371929ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790777044112832 > lease_revoke:<id:40899b3489ce13f9>","response":"size:28"}
	{"level":"info","ts":"2025-12-19T02:57:04.433943Z","caller":"traceutil/trace.go:172","msg":"trace[312140471] linearizableReadLoop","detail":"{readStateIndex:396; appliedIndex:395; }","duration":"123.66301ms","start":"2025-12-19T02:57:04.310270Z","end":"2025-12-19T02:57:04.433933Z","steps":["trace[312140471] 'read index received'  (duration: 25.579µs)","trace[312140471] 'applied index is now lower than readState.Index'  (duration: 123.63693ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:57:04.434076Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.795989ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-211152\" limit:1 ","response":"range_response_count:1 size:5560"}
	{"level":"info","ts":"2025-12-19T02:57:04.434139Z","caller":"traceutil/trace.go:172","msg":"trace[902745371] range","detail":"{range_begin:/registry/minions/pause-211152; range_end:; response_count:1; response_revision:381; }","duration":"123.864499ms","start":"2025-12-19T02:57:04.310259Z","end":"2025-12-19T02:57:04.434123Z","steps":["trace[902745371] 'agreement among raft nodes before linearized reading'  (duration: 123.701861ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:57:04.751311Z","caller":"traceutil/trace.go:172","msg":"trace[398562069] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"222.974474ms","start":"2025-12-19T02:57:04.528322Z","end":"2025-12-19T02:57:04.751297Z","steps":["trace[398562069] 'process raft request'  (duration: 222.836809ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:57:20 up 39 min,  0 user,  load average: 2.73, 1.37, 1.23
	Linux pause-211152 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a9ef06e0549fe0c824c55a1142a63e1b3abf5f40ab71885c2173c72adba1b207] <==
	I1219 02:56:57.271590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 02:56:57.271895       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1219 02:56:57.272029       1 main.go:148] setting mtu 1500 for CNI 
	I1219 02:56:57.272053       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 02:56:57.272078       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T02:56:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 02:56:57.566629       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:56:57.566653       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:56:57.566665       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:56:57.566963       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:56:57.967063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:56:57.967094       1 metrics.go:72] Registering metrics
	I1219 02:56:57.967152       1 controller.go:711] "Syncing nftables rules"
	I1219 02:57:07.566879       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 02:57:07.566962       1 main.go:301] handling current node
	I1219 02:57:17.573944       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 02:57:17.573986       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d78476b58e5cdd9f164b0f0c59c9f3c1c62003d2686a83dcfabc253d17de5158] <==
	I1219 02:56:47.272055       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1219 02:56:47.272087       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1219 02:56:47.280412       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1219 02:56:47.280446       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1219 02:56:47.283482       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 02:56:47.283772       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1219 02:56:47.287570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 02:56:47.288506       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1219 02:56:48.076462       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1219 02:56:48.080241       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1219 02:56:48.080314       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1219 02:56:48.726656       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 02:56:48.775209       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 02:56:48.889641       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1219 02:56:48.897338       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1219 02:56:48.899022       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 02:56:48.903981       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 02:56:49.105454       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 02:56:49.816970       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 02:56:49.838365       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 02:56:49.855878       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 02:56:54.866078       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 02:56:55.168679       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 02:56:55.176777       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 02:56:55.207823       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3715e7afbb70c49e2c2f6cda05153ddd43e82667e9d2cd236d4bda8c7d4ee889] <==
	I1219 02:56:54.106381       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 02:56:54.106378       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1219 02:56:54.106513       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 02:56:54.107638       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 02:56:54.107659       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 02:56:54.107757       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 02:56:54.109091       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 02:56:54.109123       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 02:56:54.109429       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 02:56:54.110814       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 02:56:54.110847       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 02:56:54.110865       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 02:56:54.110869       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 02:56:54.110877       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 02:56:54.112525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 02:56:54.113762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 02:56:54.113806       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 02:56:54.113823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 02:56:54.114269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 02:56:54.114961       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:56:54.118050       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-211152" podCIDRs=["10.244.0.0/24"]
	I1219 02:56:54.123033       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 02:56:54.124101       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 02:56:54.144684       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:57:09.058790       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [02047f414f2d42bf767be3e09438a965f51890f6f3b2813b11142351f7d514cd] <==
	I1219 02:56:55.645352       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:56:55.711944       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:56:55.812328       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:56:55.812369       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1219 02:56:55.812489       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:56:55.837602       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:56:55.837658       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:56:55.843800       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:56:55.844263       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:56:55.844297       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:56:55.845844       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:56:55.845879       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:56:55.845929       1 config.go:309] "Starting node config controller"
	I1219 02:56:55.845862       1 config.go:200] "Starting service config controller"
	I1219 02:56:55.845947       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:56:55.845949       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:56:55.845955       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:56:55.845940       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:56:55.845965       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:56:55.946997       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:56:55.947021       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:56:55.946999       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [673af32a168e774c07c1b59798d285d1b807b06b26669fb132366d72860d131b] <==
	E1219 02:56:47.192302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:56:47.194012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:56:47.194450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:56:47.194958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:56:47.195105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:56:47.195527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:56:47.195779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:56:47.195946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 02:56:47.196026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 02:56:47.196021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:56:47.196116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 02:56:47.196152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:56:47.196217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:56:47.199055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:56:47.200039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:56:48.074324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:56:48.132074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 02:56:48.215487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:56:48.260277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:56:48.312324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:56:48.345785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:56:48.382384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:56:48.414858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:56:48.680489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1219 02:56:51.884014       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 02:56:50 pause-211152 kubelet[1313]: I1219 02:56:50.851376    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-211152" podStartSLOduration=1.8513572360000001 podStartE2EDuration="1.851357236s" podCreationTimestamp="2025-12-19 02:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:56:50.82383061 +0000 UTC m=+1.246604810" watchObservedRunningTime="2025-12-19 02:56:50.851357236 +0000 UTC m=+1.274131441"
	Dec 19 02:56:54 pause-211152 kubelet[1313]: I1219 02:56:54.127585    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 19 02:56:54 pause-211152 kubelet[1313]: I1219 02:56:54.128290    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267367    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa88e2ba-00c3-457e-9a34-ed188c55809e-xtables-lock\") pod \"kube-proxy-gq4jq\" (UID: \"aa88e2ba-00c3-457e-9a34-ed188c55809e\") " pod="kube-system/kube-proxy-gq4jq"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267471    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/201c2c67-9f3b-4c3f-8ac3-e9520cea4652-xtables-lock\") pod \"kindnet-hrl64\" (UID: \"201c2c67-9f3b-4c3f-8ac3-e9520cea4652\") " pod="kube-system/kindnet-hrl64"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267507    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bhxd\" (UniqueName: \"kubernetes.io/projected/aa88e2ba-00c3-457e-9a34-ed188c55809e-kube-api-access-7bhxd\") pod \"kube-proxy-gq4jq\" (UID: \"aa88e2ba-00c3-457e-9a34-ed188c55809e\") " pod="kube-system/kube-proxy-gq4jq"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267531    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/201c2c67-9f3b-4c3f-8ac3-e9520cea4652-cni-cfg\") pod \"kindnet-hrl64\" (UID: \"201c2c67-9f3b-4c3f-8ac3-e9520cea4652\") " pod="kube-system/kindnet-hrl64"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267554    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/201c2c67-9f3b-4c3f-8ac3-e9520cea4652-lib-modules\") pod \"kindnet-hrl64\" (UID: \"201c2c67-9f3b-4c3f-8ac3-e9520cea4652\") " pod="kube-system/kindnet-hrl64"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267575    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgk66\" (UniqueName: \"kubernetes.io/projected/201c2c67-9f3b-4c3f-8ac3-e9520cea4652-kube-api-access-dgk66\") pod \"kindnet-hrl64\" (UID: \"201c2c67-9f3b-4c3f-8ac3-e9520cea4652\") " pod="kube-system/kindnet-hrl64"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267598    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa88e2ba-00c3-457e-9a34-ed188c55809e-kube-proxy\") pod \"kube-proxy-gq4jq\" (UID: \"aa88e2ba-00c3-457e-9a34-ed188c55809e\") " pod="kube-system/kube-proxy-gq4jq"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267619    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa88e2ba-00c3-457e-9a34-ed188c55809e-lib-modules\") pod \"kube-proxy-gq4jq\" (UID: \"aa88e2ba-00c3-457e-9a34-ed188c55809e\") " pod="kube-system/kube-proxy-gq4jq"
	Dec 19 02:56:57 pause-211152 kubelet[1313]: I1219 02:56:57.819040    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gq4jq" podStartSLOduration=2.819018365 podStartE2EDuration="2.819018365s" podCreationTimestamp="2025-12-19 02:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:56:55.818808949 +0000 UTC m=+6.241583159" watchObservedRunningTime="2025-12-19 02:56:57.819018365 +0000 UTC m=+8.241792567"
	Dec 19 02:56:57 pause-211152 kubelet[1313]: I1219 02:56:57.851550    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hrl64" podStartSLOduration=1.315909333 podStartE2EDuration="2.85153049s" podCreationTimestamp="2025-12-19 02:56:55 +0000 UTC" firstStartedPulling="2025-12-19 02:56:55.543500055 +0000 UTC m=+5.966274241" lastFinishedPulling="2025-12-19 02:56:57.079121202 +0000 UTC m=+7.501895398" observedRunningTime="2025-12-19 02:56:57.819397128 +0000 UTC m=+8.242171326" watchObservedRunningTime="2025-12-19 02:56:57.85153049 +0000 UTC m=+8.274304695"
	Dec 19 02:57:07 pause-211152 kubelet[1313]: I1219 02:57:07.745126    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 19 02:57:07 pause-211152 kubelet[1313]: I1219 02:57:07.860414    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d24d1284-c2a0-408b-a164-1e568a18f0d1-config-volume\") pod \"coredns-66bc5c9577-jnrjd\" (UID: \"d24d1284-c2a0-408b-a164-1e568a18f0d1\") " pod="kube-system/coredns-66bc5c9577-jnrjd"
	Dec 19 02:57:07 pause-211152 kubelet[1313]: I1219 02:57:07.860462    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qchxz\" (UniqueName: \"kubernetes.io/projected/d24d1284-c2a0-408b-a164-1e568a18f0d1-kube-api-access-qchxz\") pod \"coredns-66bc5c9577-jnrjd\" (UID: \"d24d1284-c2a0-408b-a164-1e568a18f0d1\") " pod="kube-system/coredns-66bc5c9577-jnrjd"
	Dec 19 02:57:08 pause-211152 kubelet[1313]: I1219 02:57:08.860213    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jnrjd" podStartSLOduration=13.860179658 podStartE2EDuration="13.860179658s" podCreationTimestamp="2025-12-19 02:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:57:08.848143998 +0000 UTC m=+19.270918208" watchObservedRunningTime="2025-12-19 02:57:08.860179658 +0000 UTC m=+19.282953863"
	Dec 19 02:57:13 pause-211152 kubelet[1313]: W1219 02:57:13.846119    1313 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 19 02:57:13 pause-211152 kubelet[1313]: E1219 02:57:13.846255    1313 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 19 02:57:13 pause-211152 kubelet[1313]: E1219 02:57:13.846304    1313 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 19 02:57:13 pause-211152 kubelet[1313]: E1219 02:57:13.846322    1313 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 19 02:57:17 pause-211152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 02:57:17 pause-211152 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 02:57:17 pause-211152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 02:57:17 pause-211152 systemd[1]: kubelet.service: Consumed 1.218s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-211152 -n pause-211152
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-211152 -n pause-211152: exit status 2 (343.196551ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-211152 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-211152
helpers_test.go:244: (dbg) docker inspect pause-211152:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef",
	        "Created": "2025-12-19T02:56:34.788387799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T02:56:34.840032652Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef/hosts",
	        "LogPath": "/var/lib/docker/containers/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef/b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef-json.log",
	        "Name": "/pause-211152",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-211152:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-211152",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b8bc9a004d2b725ceee47d87e3a73bcefa58c7613ba9650dceee7b7bdaa3ebef",
	                "LowerDir": "/var/lib/docker/overlay2/1419e581290a987b7860fe2e44d5319b0d68b10f76f318b976831ffccf008092-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1419e581290a987b7860fe2e44d5319b0d68b10f76f318b976831ffccf008092/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1419e581290a987b7860fe2e44d5319b0d68b10f76f318b976831ffccf008092/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1419e581290a987b7860fe2e44d5319b0d68b10f76f318b976831ffccf008092/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-211152",
	                "Source": "/var/lib/docker/volumes/pause-211152/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-211152",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-211152",
	                "name.minikube.sigs.k8s.io": "pause-211152",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4ad80853a92c74a8fb7d8d72cbe3080c51cf39c02a350bab6b71d126f8f4d51a",
	            "SandboxKey": "/var/run/docker/netns/4ad80853a92c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-211152": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b1f39185e935b9b2e42855cc34007e854ee0a319f8e2789f551113e1723c7022",
	                    "EndpointID": "edb0382fadade3922a1d198b00ded6dd717ba2a3c87cbfa80e6292c929440e0f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:1a:72:ef:89:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-211152",
	                        "b8bc9a004d2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
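The inspect JSON above can also be narrowed to individual fields with docker inspect's Go-template formatter. The commands below are only a minimal reproduction sketch, not commands run by the test harness; the profile name pause-211152 and the network/port fields are taken from the JSON shown above, while the HostConfig path is assumed from docker's standard inspect schema.

	# hypothetical follow-up queries against the same container (not part of the harness output)
	docker inspect pause-211152 --format '{{ .HostConfig.Memory }}'                                          # memory limit (3221225472 above)
	docker inspect pause-211152 --format '{{ (index .NetworkSettings.Networks "pause-211152").IPAddress }}'  # 192.168.103.2 above
	docker inspect pause-211152 --format '{{ json .NetworkSettings.Ports }}'                                 # host port mappings
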
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-211152 -n pause-211152
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-211152 -n pause-211152: exit status 2 (337.911575ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
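The --format={{.Host}} template above restricts the status output to the host state alone; a fuller machine-readable view of the same profile (a hypothetical follow-up, not executed by the harness) can be requested with minikube's JSON output mode:

	# hypothetical: report the profile's component state (host, kubelet, apiserver) as JSON
	out/minikube-linux-amd64 status -p pause-211152 --output=json
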
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-211152 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-211152 logs -n 25: (2.184064506s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-759961 --memory=3072 --driver=docker  --container-runtime=crio                                            │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ stop    │ -p scheduled-stop-759961 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --cancel-scheduled                                                                                 │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:54 UTC │ 19 Dec 25 02:54 UTC │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │                     │
	│ stop    │ -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:55 UTC │ 19 Dec 25 02:55 UTC │
	│ delete  │ -p scheduled-stop-759961                                                                                                    │ scheduled-stop-759961       │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p insufficient-storage-486590 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-486590 │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │                     │
	│ delete  │ -p insufficient-storage-486590                                                                                              │ insufficient-storage-486590 │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p offline-crio-172724 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-172724         │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:57 UTC │
	│ start   │ -p force-systemd-env-215639 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-215639    │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p cert-expiration-254196 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-254196      │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p pause-211152 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-211152                │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:57 UTC │
	│ delete  │ -p force-systemd-env-215639                                                                                                 │ force-systemd-env-215639    │ jenkins │ v1.37.0 │ 19 Dec 25 02:56 UTC │ 19 Dec 25 02:56 UTC │
	│ start   │ -p force-systemd-flag-675485 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-675485   │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │                     │
	│ start   │ -p pause-211152 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-211152                │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │ 19 Dec 25 02:57 UTC │
	│ delete  │ -p offline-crio-172724                                                                                                      │ offline-crio-172724         │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │ 19 Dec 25 02:57 UTC │
	│ pause   │ -p pause-211152 --alsologtostderr -v=5                                                                                      │ pause-211152                │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │                     │
	│ start   │ -p NoKubernetes-148997 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-148997         │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │                     │
	│ start   │ -p NoKubernetes-148997 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-148997         │ jenkins │ v1.37.0 │ 19 Dec 25 02:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:57:17
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:57:17.895205  214168 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:57:17.895340  214168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:57:17.895350  214168 out.go:374] Setting ErrFile to fd 2...
	I1219 02:57:17.895356  214168 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:57:17.895562  214168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:57:17.896033  214168 out.go:368] Setting JSON to false
	I1219 02:57:17.897083  214168 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2389,"bootTime":1766110649,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:57:17.897134  214168 start.go:143] virtualization: kvm guest
	I1219 02:57:17.898913  214168 out.go:179] * [NoKubernetes-148997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:57:17.900411  214168 notify.go:221] Checking for updates...
	I1219 02:57:17.900422  214168 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:57:17.901909  214168 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:57:17.903196  214168 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:57:17.904453  214168 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:57:17.905749  214168 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:57:17.906933  214168 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:57:17.912457  214168 config.go:182] Loaded profile config "cert-expiration-254196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:17.912584  214168 config.go:182] Loaded profile config "force-systemd-flag-675485": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:17.912773  214168 config.go:182] Loaded profile config "pause-211152": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:17.912896  214168 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:57:17.940686  214168 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:57:17.940802  214168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:57:18.002877  214168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 02:57:17.99015725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:57:18.002965  214168 docker.go:319] overlay module found
	I1219 02:57:18.007807  214168 out.go:179] * Using the docker driver based on user configuration
	I1219 02:57:18.009379  214168 start.go:309] selected driver: docker
	I1219 02:57:18.009399  214168 start.go:928] validating driver "docker" against <nil>
	I1219 02:57:18.009414  214168 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:57:18.010231  214168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:57:18.090739  214168 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 02:57:18.074001733 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:57:18.091099  214168 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:57:18.091451  214168 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:57:18.096206  214168 out.go:179] * Using Docker driver with root privileges
	I1219 02:57:18.097235  214168 cni.go:84] Creating CNI manager for ""
	I1219 02:57:18.097320  214168 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 02:57:18.097330  214168 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 02:57:18.097424  214168 start.go:353] cluster config:
	{Name:NoKubernetes-148997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:NoKubernetes-148997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:57:18.098588  214168 out.go:179] * Starting "NoKubernetes-148997" primary control-plane node in "NoKubernetes-148997" cluster
	I1219 02:57:18.099863  214168 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 02:57:18.101155  214168 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 02:57:18.102486  214168 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 02:57:18.102529  214168 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 02:57:18.102543  214168 cache.go:65] Caching tarball of preloaded images
	I1219 02:57:18.102570  214168 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 02:57:18.102628  214168 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 02:57:18.102638  214168 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 02:57:18.102769  214168 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/NoKubernetes-148997/config.json ...
	I1219 02:57:18.102790  214168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/NoKubernetes-148997/config.json: {Name:mkea7dabba1a7f2313980dcc6cd1805ab09f7131 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 02:57:18.131193  214168 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 02:57:18.131222  214168 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 02:57:18.131244  214168 cache.go:243] Successfully downloaded all kic artifacts
	I1219 02:57:18.131279  214168 start.go:360] acquireMachinesLock for NoKubernetes-148997: {Name:mk9e8bfe93b8039c01cc85cbc24ce878009bedb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:57:18.131385  214168 start.go:364] duration metric: took 82.015µs to acquireMachinesLock for "NoKubernetes-148997"
	I1219 02:57:18.131417  214168 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-148997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:NoKubernetes-148997 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 02:57:18.131502  214168 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.874517671Z" level=info msg="RDT not available in the host system"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.874528078Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.875411578Z" level=info msg="Conmon does support the --sync option"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.875427443Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.875440473Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.876354095Z" level=info msg="Conmon does support the --sync option"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.876369472Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.880233894Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.880252259Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.880768317Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.881154894Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.881214416Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.962805415Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-jnrjd Namespace:kube-system ID:cff0f6cb7779a651b60e5a9271e40d4cb2998b6f0919173b8d82e1c8c58e1421 UID:d24d1284-c2a0-408b-a164-1e568a18f0d1 NetNS:/var/run/netns/1d311123-c62a-4d7e-adb0-20a79d09eafe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128018}] Aliases:map[]}"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963069584Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-jnrjd for CNI network kindnet (type=ptp)"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963589625Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.96361846Z" level=info msg="Starting seccomp notifier watcher"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963669082Z" level=info msg="Create NRI interface"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963791869Z" level=info msg="built-in NRI default validator is disabled"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963808717Z" level=info msg="runtime interface created"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963822156Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963830239Z" level=info msg="runtime interface starting up..."
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963837392Z" level=info msg="starting plugins..."
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.963852043Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 19 02:57:13 pause-211152 crio[2213]: time="2025-12-19T02:57:13.964117708Z" level=info msg="No systemd watchdog enabled"
	Dec 19 02:57:13 pause-211152 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	69d306c1c1f12       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     14 seconds ago      Running             coredns                   0                   cff0f6cb7779a       coredns-66bc5c9577-jnrjd               kube-system
	a9ef06e0549fe       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   25 seconds ago      Running             kindnet-cni               0                   50231b78c65eb       kindnet-hrl64                          kube-system
	02047f414f2d4       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     27 seconds ago      Running             kube-proxy                0                   3abdc60a64415       kube-proxy-gq4jq                       kube-system
	673af32a168e7       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     37 seconds ago      Running             kube-scheduler            0                   57e25cfd7d3b6       kube-scheduler-pause-211152            kube-system
	d78476b58e5cd       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     37 seconds ago      Running             kube-apiserver            0                   a988cfb3ee133       kube-apiserver-pause-211152            kube-system
	3715e7afbb70c       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     37 seconds ago      Running             kube-controller-manager   0                   4bf6d33e9749c       kube-controller-manager-pause-211152   kube-system
	21c83683c5e2f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     37 seconds ago      Running             etcd                      0                   78e2ee5185673       etcd-pause-211152                      kube-system
	
	
	==> coredns [69d306c1c1f12d041d3d325fe991687c4c32f93d8ce29f396cd6917e07519490] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60444 - 34359 "HINFO IN 1979885502063111897.8945076085577984792. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038642445s
	
	
	==> describe nodes <==
	Name:               pause-211152
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-211152
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=pause-211152
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_56_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:56:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-211152
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:57:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:57:07 +0000   Fri, 19 Dec 2025 02:56:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:57:07 +0000   Fri, 19 Dec 2025 02:56:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:57:07 +0000   Fri, 19 Dec 2025 02:56:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:57:07 +0000   Fri, 19 Dec 2025 02:57:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-211152
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                8f68b359-8880-4916-a92f-ded77b347c4a
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jnrjd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-211152                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-hrl64                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-211152             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-211152    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-gq4jq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-211152             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node pause-211152 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node pause-211152 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node pause-211152 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node pause-211152 event: Registered Node pause-211152 in Controller
	  Normal  NodeReady                16s   kubelet          Node pause-211152 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.091115] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025741] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.646270] kauditd_printk_skb: 47 callbacks suppressed
	[Dec19 02:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.041250] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.024871] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +1.022884] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +2.047793] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[  +8.127187] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[ +16.382230] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	[Dec19 02:28] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 62 37 93 21 9b 47 ce e8 b6 43 5f 79 08 00
	
	
	==> etcd [21c83683c5e2fcc64c9605ffbc692af28bd5647eb5834a78753b0e5f1adb1f1e] <==
	{"level":"warn","ts":"2025-12-19T02:56:46.274883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.282315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.289331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.296247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.311395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.324353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.330926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.338268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.345349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.352509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.359415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.365993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.373879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.381216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.399857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.410220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.428295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.436433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.447370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:56:46.507758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:57:04.433841Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.371929ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790777044112832 > lease_revoke:<id:40899b3489ce13f9>","response":"size:28"}
	{"level":"info","ts":"2025-12-19T02:57:04.433943Z","caller":"traceutil/trace.go:172","msg":"trace[312140471] linearizableReadLoop","detail":"{readStateIndex:396; appliedIndex:395; }","duration":"123.66301ms","start":"2025-12-19T02:57:04.310270Z","end":"2025-12-19T02:57:04.433933Z","steps":["trace[312140471] 'read index received'  (duration: 25.579µs)","trace[312140471] 'applied index is now lower than readState.Index'  (duration: 123.63693ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:57:04.434076Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.795989ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-211152\" limit:1 ","response":"range_response_count:1 size:5560"}
	{"level":"info","ts":"2025-12-19T02:57:04.434139Z","caller":"traceutil/trace.go:172","msg":"trace[902745371] range","detail":"{range_begin:/registry/minions/pause-211152; range_end:; response_count:1; response_revision:381; }","duration":"123.864499ms","start":"2025-12-19T02:57:04.310259Z","end":"2025-12-19T02:57:04.434123Z","steps":["trace[902745371] 'agreement among raft nodes before linearized reading'  (duration: 123.701861ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:57:04.751311Z","caller":"traceutil/trace.go:172","msg":"trace[398562069] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"222.974474ms","start":"2025-12-19T02:57:04.528322Z","end":"2025-12-19T02:57:04.751297Z","steps":["trace[398562069] 'process raft request'  (duration: 222.836809ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:57:23 up 39 min,  0 user,  load average: 2.75, 1.40, 1.24
	Linux pause-211152 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a9ef06e0549fe0c824c55a1142a63e1b3abf5f40ab71885c2173c72adba1b207] <==
	I1219 02:56:57.271590       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 02:56:57.271895       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1219 02:56:57.272029       1 main.go:148] setting mtu 1500 for CNI 
	I1219 02:56:57.272053       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 02:56:57.272078       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T02:56:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 02:56:57.566629       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 02:56:57.566653       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 02:56:57.566665       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 02:56:57.566963       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 02:56:57.967063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 02:56:57.967094       1 metrics.go:72] Registering metrics
	I1219 02:56:57.967152       1 controller.go:711] "Syncing nftables rules"
	I1219 02:57:07.566879       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 02:57:07.566962       1 main.go:301] handling current node
	I1219 02:57:17.573944       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 02:57:17.573986       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d78476b58e5cdd9f164b0f0c59c9f3c1c62003d2686a83dcfabc253d17de5158] <==
	I1219 02:56:47.272055       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1219 02:56:47.272087       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1219 02:56:47.280412       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1219 02:56:47.280446       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1219 02:56:47.283482       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 02:56:47.283772       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1219 02:56:47.287570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 02:56:47.288506       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1219 02:56:48.076462       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1219 02:56:48.080241       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1219 02:56:48.080314       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1219 02:56:48.726656       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 02:56:48.775209       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 02:56:48.889641       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1219 02:56:48.897338       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1219 02:56:48.899022       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 02:56:48.903981       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 02:56:49.105454       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 02:56:49.816970       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 02:56:49.838365       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 02:56:49.855878       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 02:56:54.866078       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 02:56:55.168679       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 02:56:55.176777       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 02:56:55.207823       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3715e7afbb70c49e2c2f6cda05153ddd43e82667e9d2cd236d4bda8c7d4ee889] <==
	I1219 02:56:54.106381       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 02:56:54.106378       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1219 02:56:54.106513       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 02:56:54.107638       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 02:56:54.107659       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 02:56:54.107757       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 02:56:54.109091       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 02:56:54.109123       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 02:56:54.109429       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 02:56:54.110814       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 02:56:54.110847       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 02:56:54.110865       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 02:56:54.110869       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 02:56:54.110877       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 02:56:54.112525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 02:56:54.113762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 02:56:54.113806       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 02:56:54.113823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 02:56:54.114269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 02:56:54.114961       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:56:54.118050       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-211152" podCIDRs=["10.244.0.0/24"]
	I1219 02:56:54.123033       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 02:56:54.124101       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 02:56:54.144684       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:57:09.058790       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [02047f414f2d42bf767be3e09438a965f51890f6f3b2813b11142351f7d514cd] <==
	I1219 02:56:55.645352       1 server_linux.go:53] "Using iptables proxy"
	I1219 02:56:55.711944       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:56:55.812328       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:56:55.812369       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1219 02:56:55.812489       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:56:55.837602       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 02:56:55.837658       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:56:55.843800       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:56:55.844263       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:56:55.844297       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:56:55.845844       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:56:55.845879       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:56:55.845929       1 config.go:309] "Starting node config controller"
	I1219 02:56:55.845862       1 config.go:200] "Starting service config controller"
	I1219 02:56:55.845947       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:56:55.845949       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:56:55.845955       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:56:55.845940       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:56:55.845965       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:56:55.946997       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:56:55.947021       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:56:55.946999       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [673af32a168e774c07c1b59798d285d1b807b06b26669fb132366d72860d131b] <==
	E1219 02:56:47.192302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:56:47.194012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:56:47.194450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:56:47.194958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:56:47.195105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 02:56:47.195527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:56:47.195779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:56:47.195946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 02:56:47.196026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 02:56:47.196021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:56:47.196116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 02:56:47.196152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:56:47.196217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:56:47.199055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:56:47.200039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:56:48.074324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:56:48.132074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 02:56:48.215487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:56:48.260277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:56:48.312324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:56:48.345785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:56:48.382384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:56:48.414858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:56:48.680489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1219 02:56:51.884014       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 02:56:50 pause-211152 kubelet[1313]: I1219 02:56:50.851376    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-211152" podStartSLOduration=1.8513572360000001 podStartE2EDuration="1.851357236s" podCreationTimestamp="2025-12-19 02:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:56:50.82383061 +0000 UTC m=+1.246604810" watchObservedRunningTime="2025-12-19 02:56:50.851357236 +0000 UTC m=+1.274131441"
	Dec 19 02:56:54 pause-211152 kubelet[1313]: I1219 02:56:54.127585    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 19 02:56:54 pause-211152 kubelet[1313]: I1219 02:56:54.128290    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267367    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa88e2ba-00c3-457e-9a34-ed188c55809e-xtables-lock\") pod \"kube-proxy-gq4jq\" (UID: \"aa88e2ba-00c3-457e-9a34-ed188c55809e\") " pod="kube-system/kube-proxy-gq4jq"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267471    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/201c2c67-9f3b-4c3f-8ac3-e9520cea4652-xtables-lock\") pod \"kindnet-hrl64\" (UID: \"201c2c67-9f3b-4c3f-8ac3-e9520cea4652\") " pod="kube-system/kindnet-hrl64"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267507    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bhxd\" (UniqueName: \"kubernetes.io/projected/aa88e2ba-00c3-457e-9a34-ed188c55809e-kube-api-access-7bhxd\") pod \"kube-proxy-gq4jq\" (UID: \"aa88e2ba-00c3-457e-9a34-ed188c55809e\") " pod="kube-system/kube-proxy-gq4jq"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267531    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/201c2c67-9f3b-4c3f-8ac3-e9520cea4652-cni-cfg\") pod \"kindnet-hrl64\" (UID: \"201c2c67-9f3b-4c3f-8ac3-e9520cea4652\") " pod="kube-system/kindnet-hrl64"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267554    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/201c2c67-9f3b-4c3f-8ac3-e9520cea4652-lib-modules\") pod \"kindnet-hrl64\" (UID: \"201c2c67-9f3b-4c3f-8ac3-e9520cea4652\") " pod="kube-system/kindnet-hrl64"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267575    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgk66\" (UniqueName: \"kubernetes.io/projected/201c2c67-9f3b-4c3f-8ac3-e9520cea4652-kube-api-access-dgk66\") pod \"kindnet-hrl64\" (UID: \"201c2c67-9f3b-4c3f-8ac3-e9520cea4652\") " pod="kube-system/kindnet-hrl64"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267598    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa88e2ba-00c3-457e-9a34-ed188c55809e-kube-proxy\") pod \"kube-proxy-gq4jq\" (UID: \"aa88e2ba-00c3-457e-9a34-ed188c55809e\") " pod="kube-system/kube-proxy-gq4jq"
	Dec 19 02:56:55 pause-211152 kubelet[1313]: I1219 02:56:55.267619    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa88e2ba-00c3-457e-9a34-ed188c55809e-lib-modules\") pod \"kube-proxy-gq4jq\" (UID: \"aa88e2ba-00c3-457e-9a34-ed188c55809e\") " pod="kube-system/kube-proxy-gq4jq"
	Dec 19 02:56:57 pause-211152 kubelet[1313]: I1219 02:56:57.819040    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gq4jq" podStartSLOduration=2.819018365 podStartE2EDuration="2.819018365s" podCreationTimestamp="2025-12-19 02:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:56:55.818808949 +0000 UTC m=+6.241583159" watchObservedRunningTime="2025-12-19 02:56:57.819018365 +0000 UTC m=+8.241792567"
	Dec 19 02:56:57 pause-211152 kubelet[1313]: I1219 02:56:57.851550    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hrl64" podStartSLOduration=1.315909333 podStartE2EDuration="2.85153049s" podCreationTimestamp="2025-12-19 02:56:55 +0000 UTC" firstStartedPulling="2025-12-19 02:56:55.543500055 +0000 UTC m=+5.966274241" lastFinishedPulling="2025-12-19 02:56:57.079121202 +0000 UTC m=+7.501895398" observedRunningTime="2025-12-19 02:56:57.819397128 +0000 UTC m=+8.242171326" watchObservedRunningTime="2025-12-19 02:56:57.85153049 +0000 UTC m=+8.274304695"
	Dec 19 02:57:07 pause-211152 kubelet[1313]: I1219 02:57:07.745126    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 19 02:57:07 pause-211152 kubelet[1313]: I1219 02:57:07.860414    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d24d1284-c2a0-408b-a164-1e568a18f0d1-config-volume\") pod \"coredns-66bc5c9577-jnrjd\" (UID: \"d24d1284-c2a0-408b-a164-1e568a18f0d1\") " pod="kube-system/coredns-66bc5c9577-jnrjd"
	Dec 19 02:57:07 pause-211152 kubelet[1313]: I1219 02:57:07.860462    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qchxz\" (UniqueName: \"kubernetes.io/projected/d24d1284-c2a0-408b-a164-1e568a18f0d1-kube-api-access-qchxz\") pod \"coredns-66bc5c9577-jnrjd\" (UID: \"d24d1284-c2a0-408b-a164-1e568a18f0d1\") " pod="kube-system/coredns-66bc5c9577-jnrjd"
	Dec 19 02:57:08 pause-211152 kubelet[1313]: I1219 02:57:08.860213    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jnrjd" podStartSLOduration=13.860179658 podStartE2EDuration="13.860179658s" podCreationTimestamp="2025-12-19 02:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:57:08.848143998 +0000 UTC m=+19.270918208" watchObservedRunningTime="2025-12-19 02:57:08.860179658 +0000 UTC m=+19.282953863"
	Dec 19 02:57:13 pause-211152 kubelet[1313]: W1219 02:57:13.846119    1313 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Dec 19 02:57:13 pause-211152 kubelet[1313]: E1219 02:57:13.846255    1313 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Dec 19 02:57:13 pause-211152 kubelet[1313]: E1219 02:57:13.846304    1313 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 19 02:57:13 pause-211152 kubelet[1313]: E1219 02:57:13.846322    1313 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Dec 19 02:57:17 pause-211152 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 02:57:17 pause-211152 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 02:57:17 pause-211152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 02:57:17 pause-211152 systemd[1]: kubelet.service: Consumed 1.218s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-211152 -n pause-211152
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-211152 -n pause-211152: exit status 2 (405.703816ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-211152 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (832.431311ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:04:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-433330 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-433330 describe deploy/metrics-server -n kube-system: exit status 1 (75.122902ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-433330 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-433330
helpers_test.go:244: (dbg) docker inspect old-k8s-version-433330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	        "Created": "2025-12-19T03:03:42.290394762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313341,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:03:42.344115106Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hosts",
	        "LogPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18-json.log",
	        "Name": "/old-k8s-version-433330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-433330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-433330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	                "LowerDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-433330",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-433330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-433330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "31497b5be211f53e93bf51a75e1acf790d2fabbfecec0c1e8fc9052747b920e3",
	            "SandboxKey": "/var/run/docker/netns/31497b5be211",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-433330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ebf807015d65c8db1230e3a313a61194a5685b902dee458d727805bc340fe33d",
	                    "EndpointID": "5e661491007e312585d4c270e68f29ca196db492856a24b5bb18c54ffb3cddb5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "52:45:99:76:e3:43",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-433330",
	                        "ed00f1899233"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25
E1219 03:04:41.583157    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25: (1.387526252s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-821749 sudo cat /etc/containerd/config.toml                                                                                                  │ calico-821749          │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status cri-docker --all --full --no-pager                                                                      │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ ssh     │ -p calico-821749 sudo containerd config dump                                                                                                           │ calico-821749          │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat cri-docker --no-pager                                                                                      │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p calico-821749 sudo systemctl status crio --all --full --no-pager                                                                                    │ calico-821749          │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                 │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ ssh     │ -p calico-821749 sudo systemctl cat crio --no-pager                                                                                                    │ calico-821749          │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                           │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p calico-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                          │ calico-821749          │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cri-dockerd --version                                                                                                    │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p calico-821749 sudo crio config                                                                                                                      │ calico-821749          │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status containerd --all --full --no-pager                                                                      │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p calico-821749                                                                                                                                       │ calico-821749          │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat containerd --no-pager                                                                                      │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /lib/systemd/system/containerd.service                                                                               │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /etc/containerd/config.toml                                                                                          │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                   │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                            │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                            │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                  │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                              │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                               │ custom-flannel-821749  │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ embed-certs-805185     │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-433330 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                │ no-preload-278042      │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:04:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:04:37.307297  330835 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:04:37.307541  330835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:37.307548  330835 out.go:374] Setting ErrFile to fd 2...
	I1219 03:04:37.307553  330835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:37.307774  330835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:04:37.308317  330835 out.go:368] Setting JSON to false
	I1219 03:04:37.309640  330835 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2828,"bootTime":1766110649,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:04:37.309730  330835 start.go:143] virtualization: kvm guest
	I1219 03:04:37.311887  330835 out.go:179] * [embed-certs-805185] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:04:37.313219  330835 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:04:37.313243  330835 notify.go:221] Checking for updates...
	I1219 03:04:37.315723  330835 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:04:37.316972  330835 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:04:37.318154  330835 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:04:37.323280  330835 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:04:37.325073  330835 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:04:37.327144  330835 config.go:182] Loaded profile config "custom-flannel-821749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:04:37.327303  330835 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:04:37.327403  330835 config.go:182] Loaded profile config "old-k8s-version-433330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1219 03:04:37.327485  330835 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:04:37.355337  330835 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:04:37.355436  330835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:37.419003  330835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:04:37.40649361 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:37.419117  330835 docker.go:319] overlay module found
	I1219 03:04:37.420731  330835 out.go:179] * Using the docker driver based on user configuration
	I1219 03:04:37.421824  330835 start.go:309] selected driver: docker
	I1219 03:04:37.421844  330835 start.go:928] validating driver "docker" against <nil>
	I1219 03:04:37.421859  330835 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:04:37.422542  330835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:37.485869  330835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:04:37.474717632 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:37.486071  330835 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 03:04:37.486272  330835 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:37.487733  330835 out.go:179] * Using Docker driver with root privileges
	I1219 03:04:37.488926  330835 cni.go:84] Creating CNI manager for ""
	I1219 03:04:37.489019  330835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:04:37.489032  330835 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 03:04:37.489101  330835 start.go:353] cluster config:
	{Name:embed-certs-805185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-805185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:37.490545  330835 out.go:179] * Starting "embed-certs-805185" primary control-plane node in "embed-certs-805185" cluster
	I1219 03:04:37.491855  330835 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:04:37.493169  330835 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:04:37.494308  330835 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:04:37.494343  330835 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:04:37.494356  330835 cache.go:65] Caching tarball of preloaded images
	I1219 03:04:37.494394  330835 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:04:37.494446  330835 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:04:37.494457  330835 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:04:37.494557  330835 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/embed-certs-805185/config.json ...
	I1219 03:04:37.494585  330835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/embed-certs-805185/config.json: {Name:mk044658ea0fcd5226f3f66d0c4f9ec503033a74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:37.517985  330835 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:04:37.518015  330835 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:04:37.518037  330835 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:04:37.518071  330835 start.go:360] acquireMachinesLock for embed-certs-805185: {Name:mke5f1a3f6dc054f812dac49839e280a17cc403b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:04:37.518204  330835 start.go:364] duration metric: took 110.341µs to acquireMachinesLock for "embed-certs-805185"
	I1219 03:04:37.518256  330835 start.go:93] Provisioning new machine with config: &{Name:embed-certs-805185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-805185 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:04:37.518371  330835 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 19 03:04:28 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:28.547849401Z" level=info msg="Starting container: e14efef6ad88a5f18fb207b0c485437b0c1b04aa18bdb8de5810c5f64c15a836" id=45fd7f76-e659-41a6-8ac0-0a866bd8d04d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:04:28 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:28.55314779Z" level=info msg="Started container" PID=2130 containerID=e14efef6ad88a5f18fb207b0c485437b0c1b04aa18bdb8de5810c5f64c15a836 description=kube-system/coredns-5dd5756b68-vp79f/coredns id=45fd7f76-e659-41a6-8ac0-0a866bd8d04d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6fd8cb69f4b9292ec252dd9e520a874dff5b9d2ffb82cefa68ca8662ebd76e8e
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.316261958Z" level=info msg="Running pod sandbox: default/busybox/POD" id=345337e9-6496-4240-b9d3-6dd422a1e145 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.316345547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.32276655Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2db4f484f265be55ee86c7de622205722df27941744a0e02b5d094056055b1b5 UID:1b41a78a-e73b-4f8e-8857-c9e0e83de64f NetNS:/var/run/netns/1a2d8aff-911e-4059-9076-09777b165ff9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008b0b18}] Aliases:map[]}"
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.322919703Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.336861029Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2db4f484f265be55ee86c7de622205722df27941744a0e02b5d094056055b1b5 UID:1b41a78a-e73b-4f8e-8857-c9e0e83de64f NetNS:/var/run/netns/1a2d8aff-911e-4059-9076-09777b165ff9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008b0b18}] Aliases:map[]}"
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.337068887Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.338240808Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.339500617Z" level=info msg="Ran pod sandbox 2db4f484f265be55ee86c7de622205722df27941744a0e02b5d094056055b1b5 with infra container: default/busybox/POD" id=345337e9-6496-4240-b9d3-6dd422a1e145 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.340960206Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4bc0eb9e-156f-4d2d-b752-6719d2f04bf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.3411243Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4bc0eb9e-156f-4d2d-b752-6719d2f04bf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.341179909Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4bc0eb9e-156f-4d2d-b752-6719d2f04bf7 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.343080589Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6cbbca10-0ab6-4442-b37f-1553dd625f8c name=/runtime.v1.ImageService/PullImage
	Dec 19 03:04:31 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:31.344871745Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.73764919Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6cbbca10-0ab6-4442-b37f-1553dd625f8c name=/runtime.v1.ImageService/PullImage
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.738686405Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=48d94856-9849-4593-994f-57c86c66126f name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.740344793Z" level=info msg="Creating container: default/busybox/busybox" id=976c6e3f-cda0-4a77-9a51-7bba6da6bd51 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.740491288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.744545907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.745228084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.774166917Z" level=info msg="Created container 1d83bc31b6d9fd219a8e77fbe0b28953e1361e6b35790c54486fbeb805c48796: default/busybox/busybox" id=976c6e3f-cda0-4a77-9a51-7bba6da6bd51 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.774994832Z" level=info msg="Starting container: 1d83bc31b6d9fd219a8e77fbe0b28953e1361e6b35790c54486fbeb805c48796" id=1a305215-edd4-4aa6-9549-b8610d156e8a name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:04:32 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:32.77711405Z" level=info msg="Started container" PID=2204 containerID=1d83bc31b6d9fd219a8e77fbe0b28953e1361e6b35790c54486fbeb805c48796 description=default/busybox/busybox id=1a305215-edd4-4aa6-9549-b8610d156e8a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2db4f484f265be55ee86c7de622205722df27941744a0e02b5d094056055b1b5
	Dec 19 03:04:40 old-k8s-version-433330 crio[775]: time="2025-12-19T03:04:40.11269434Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	1d83bc31b6d9f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   2db4f484f265b       busybox                                          default
	e14efef6ad88a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   6fd8cb69f4b92       coredns-5dd5756b68-vp79f                         kube-system
	4ae902f4a38e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   35607b3105b06       storage-provisioner                              kube-system
	02e93ab5cb59d       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   9b989692b4320       kindnet-hm2sz                                    kube-system
	faf512f1708a7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   7a704173f7c9f       kube-proxy-wdrk8                                 kube-system
	3e22f5235af3d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   54723ae2d7a57       etcd-old-k8s-version-433330                      kube-system
	b41fd6077632b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   7fe42d00592b0       kube-controller-manager-old-k8s-version-433330   kube-system
	9ff443bf3171e       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   731eac2407b18       kube-apiserver-old-k8s-version-433330            kube-system
	4622149c190ad       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   60c092f1bfa37       kube-scheduler-old-k8s-version-433330            kube-system
	
	
	==> coredns [e14efef6ad88a5f18fb207b0c485437b0c1b04aa18bdb8de5810c5f64c15a836] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57939 - 50977 "HINFO IN 2590549269666087280.6590042392198838788. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037065042s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-433330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-433330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-433330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-433330
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:04:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:04:34 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:04:34 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:04:34 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:04:34 +0000   Fri, 19 Dec 2025 03:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-433330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                51a7519b-85cf-4ec7-8319-8a51b3632490
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-vp79f                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-433330                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-hm2sz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-433330             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-433330    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-wdrk8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-433330             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-433330 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [3e22f5235af3d9becbcbddff79c78fdc661607c2505491b1099de2807c47c3ca] <==
	{"level":"info","ts":"2025-12-19T03:03:57.720393Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-12-19T03:03:57.722133Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-19T03:03:57.722309Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-19T03:03:57.722371Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-19T03:03:57.722388Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-19T03:03:57.722436Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:03:58.209311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:58.209366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:58.209404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-19T03:03:58.209424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:58.209433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:58.209446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:58.209457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-19T03:03:58.210653Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-19T03:03:58.211498Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-433330 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:03:58.21166Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:03:58.21182Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:03:58.211921Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:03:58.21204Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:03:58.212626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-19T03:03:58.213129Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-19T03:03:58.21319Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-19T03:03:58.21525Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-19T03:03:58.215546Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:04:40.967121Z","caller":"traceutil/trace.go:171","msg":"trace[2146001563] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"126.647608ms","start":"2025-12-19T03:04:40.840452Z","end":"2025-12-19T03:04:40.9671Z","steps":["trace[2146001563] 'process raft request'  (duration: 126.512894ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:04:42 up 47 min,  0 user,  load average: 8.22, 4.30, 2.60
	Linux old-k8s-version-433330 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [02e93ab5cb59da1109bc35b3c4417d7854aa937d92ec4492ad0e598dc000860f] <==
	I1219 03:04:17.520162       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:04:17.520450       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1219 03:04:17.520642       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:04:17.520663       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:04:17.520679       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:04:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:04:17.753966       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:04:17.754013       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:04:17.754024       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:04:17.816585       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:04:18.157891       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:04:18.157924       1 metrics.go:72] Registering metrics
	I1219 03:04:18.158017       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:27.760461       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:04:27.760515       1 main.go:301] handling current node
	I1219 03:04:37.756816       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:04:37.756868       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9ff443bf3171e378874271d792deb5c02c1e5ae129bdb9223cf60f3701e9fb6b] <==
	I1219 03:03:59.707144       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1219 03:03:59.707750       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1219 03:03:59.707833       1 shared_informer.go:318] Caches are synced for configmaps
	I1219 03:03:59.708124       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1219 03:03:59.708192       1 aggregator.go:166] initial CRD sync complete...
	I1219 03:03:59.708209       1 autoregister_controller.go:141] Starting autoregister controller
	I1219 03:03:59.708217       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1219 03:03:59.708225       1 cache.go:39] Caches are synced for autoregister controller
	I1219 03:03:59.708735       1 controller.go:624] quota admission added evaluator for: namespaces
	I1219 03:03:59.895765       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1219 03:04:00.614323       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1219 03:04:00.622680       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1219 03:04:00.622711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1219 03:04:01.068889       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:04:01.101671       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:04:01.219440       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1219 03:04:01.226667       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1219 03:04:01.227975       1 controller.go:624] quota admission added evaluator for: endpoints
	I1219 03:04:01.233095       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:04:01.648474       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1219 03:04:02.948302       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1219 03:04:03.165492       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:04:03.178061       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1219 03:04:15.310330       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1219 03:04:15.312875       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b41fd6077632bd59fbcc7e8fbe7126cc19887d027dba7205e4202c61c19e9a92] <==
	I1219 03:04:15.352114       1 taint_manager.go:211] "Sending events to api server"
	I1219 03:04:15.352312       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1219 03:04:15.441750       1 shared_informer.go:318] Caches are synced for resource quota
	I1219 03:04:15.447065       1 shared_informer.go:318] Caches are synced for resource quota
	I1219 03:04:15.507000       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-ggzn2"
	I1219 03:04:15.512473       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vp79f"
	I1219 03:04:15.518008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="174.853433ms"
	I1219 03:04:15.525271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.206447ms"
	I1219 03:04:15.525363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.167µs"
	I1219 03:04:15.526904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.843µs"
	I1219 03:04:15.752563       1 shared_informer.go:318] Caches are synced for garbage collector
	I1219 03:04:15.752608       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1219 03:04:15.769306       1 shared_informer.go:318] Caches are synced for garbage collector
	I1219 03:04:16.131976       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1219 03:04:16.162835       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-ggzn2"
	I1219 03:04:16.179582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.334608ms"
	I1219 03:04:16.201127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.319674ms"
	I1219 03:04:16.224355       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.122852ms"
	I1219 03:04:16.224477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.271µs"
	I1219 03:04:28.189566       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="151.236µs"
	I1219 03:04:28.211256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.139µs"
	I1219 03:04:29.081290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.778µs"
	I1219 03:04:29.102425       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.050742ms"
	I1219 03:04:29.102655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.251µs"
	I1219 03:04:30.354044       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [faf512f1708a7701eaecdcf1de636cc2064905f1061f3327c70f7e5efa20c38b] <==
	I1219 03:04:15.775821       1 server_others.go:69] "Using iptables proxy"
	I1219 03:04:15.795564       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1219 03:04:15.827650       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:15.830469       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:04:15.830513       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:04:15.830523       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:04:15.830565       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:04:15.830905       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:04:15.830973       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:15.832342       1 config.go:188] "Starting service config controller"
	I1219 03:04:15.832471       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:04:15.832538       1 config.go:315] "Starting node config controller"
	I1219 03:04:15.833079       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:04:15.832586       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:04:15.833144       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:04:15.933370       1 shared_informer.go:318] Caches are synced for service config
	I1219 03:04:15.933412       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:04:15.933433       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4622149c190ad7a11a4bc38c5a3177787221493d3e9abcd414874204a3ac9609] <==
	W1219 03:03:59.680480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1219 03:03:59.680496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1219 03:03:59.680655       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1219 03:03:59.680682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1219 03:03:59.681490       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1219 03:03:59.681738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1219 03:03:59.681774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:59.681780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:59.681793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1219 03:03:59.681796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1219 03:03:59.681726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1219 03:03:59.681849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1219 03:04:00.574275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1219 03:04:00.574316       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1219 03:04:00.585640       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1219 03:04:00.585685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1219 03:04:00.685772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1219 03:04:00.685810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1219 03:04:00.764547       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1219 03:04:00.764591       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:04:00.766947       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1219 03:04:00.766980       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1219 03:04:00.852778       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1219 03:04:00.852815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1219 03:04:02.675336       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.260156    1393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.329377    1393 topology_manager.go:215] "Topology Admit Handler" podUID="c6df6f60-75af-46bf-9a07-9644745d5f72" podNamespace="kube-system" podName="kindnet-hm2sz"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.331814    1393 topology_manager.go:215] "Topology Admit Handler" podUID="b2738e98-0383-41b2-b183-a13a2a915c6c" podNamespace="kube-system" podName="kube-proxy-wdrk8"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.459619    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c6df6f60-75af-46bf-9a07-9644745d5f72-cni-cfg\") pod \"kindnet-hm2sz\" (UID: \"c6df6f60-75af-46bf-9a07-9644745d5f72\") " pod="kube-system/kindnet-hm2sz"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.459663    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b2738e98-0383-41b2-b183-a13a2a915c6c-kube-proxy\") pod \"kube-proxy-wdrk8\" (UID: \"b2738e98-0383-41b2-b183-a13a2a915c6c\") " pod="kube-system/kube-proxy-wdrk8"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.459685    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w5xr\" (UniqueName: \"kubernetes.io/projected/b2738e98-0383-41b2-b183-a13a2a915c6c-kube-api-access-9w5xr\") pod \"kube-proxy-wdrk8\" (UID: \"b2738e98-0383-41b2-b183-a13a2a915c6c\") " pod="kube-system/kube-proxy-wdrk8"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.459742    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6df6f60-75af-46bf-9a07-9644745d5f72-xtables-lock\") pod \"kindnet-hm2sz\" (UID: \"c6df6f60-75af-46bf-9a07-9644745d5f72\") " pod="kube-system/kindnet-hm2sz"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.459770    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2738e98-0383-41b2-b183-a13a2a915c6c-xtables-lock\") pod \"kube-proxy-wdrk8\" (UID: \"b2738e98-0383-41b2-b183-a13a2a915c6c\") " pod="kube-system/kube-proxy-wdrk8"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.459807    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2738e98-0383-41b2-b183-a13a2a915c6c-lib-modules\") pod \"kube-proxy-wdrk8\" (UID: \"b2738e98-0383-41b2-b183-a13a2a915c6c\") " pod="kube-system/kube-proxy-wdrk8"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.459840    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6df6f60-75af-46bf-9a07-9644745d5f72-lib-modules\") pod \"kindnet-hm2sz\" (UID: \"c6df6f60-75af-46bf-9a07-9644745d5f72\") " pod="kube-system/kindnet-hm2sz"
	Dec 19 03:04:15 old-k8s-version-433330 kubelet[1393]: I1219 03:04:15.459867    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nswrg\" (UniqueName: \"kubernetes.io/projected/c6df6f60-75af-46bf-9a07-9644745d5f72-kube-api-access-nswrg\") pod \"kindnet-hm2sz\" (UID: \"c6df6f60-75af-46bf-9a07-9644745d5f72\") " pod="kube-system/kindnet-hm2sz"
	Dec 19 03:04:16 old-k8s-version-433330 kubelet[1393]: I1219 03:04:16.067721    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wdrk8" podStartSLOduration=1.067647135 podCreationTimestamp="2025-12-19 03:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:04:16.067248292 +0000 UTC m=+13.204095153" watchObservedRunningTime="2025-12-19 03:04:16.067647135 +0000 UTC m=+13.204493993"
	Dec 19 03:04:28 old-k8s-version-433330 kubelet[1393]: I1219 03:04:28.162323    1393 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 19 03:04:28 old-k8s-version-433330 kubelet[1393]: I1219 03:04:28.187854    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-hm2sz" podStartSLOduration=11.541964581 podCreationTimestamp="2025-12-19 03:04:15 +0000 UTC" firstStartedPulling="2025-12-19 03:04:15.641221531 +0000 UTC m=+12.778068383" lastFinishedPulling="2025-12-19 03:04:17.287046047 +0000 UTC m=+14.423892904" observedRunningTime="2025-12-19 03:04:18.059935846 +0000 UTC m=+15.196782811" watchObservedRunningTime="2025-12-19 03:04:28.187789102 +0000 UTC m=+25.324635957"
	Dec 19 03:04:28 old-k8s-version-433330 kubelet[1393]: I1219 03:04:28.188170    1393 topology_manager.go:215] "Topology Admit Handler" podUID="0fba7aca-106d-40c8-8651-91680e4fedcc" podNamespace="kube-system" podName="storage-provisioner"
	Dec 19 03:04:28 old-k8s-version-433330 kubelet[1393]: I1219 03:04:28.189413    1393 topology_manager.go:215] "Topology Admit Handler" podUID="9fcc07be-0cde-4964-af90-fb09218728e6" podNamespace="kube-system" podName="coredns-5dd5756b68-vp79f"
	Dec 19 03:04:28 old-k8s-version-433330 kubelet[1393]: I1219 03:04:28.250602    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0fba7aca-106d-40c8-8651-91680e4fedcc-tmp\") pod \"storage-provisioner\" (UID: \"0fba7aca-106d-40c8-8651-91680e4fedcc\") " pod="kube-system/storage-provisioner"
	Dec 19 03:04:28 old-k8s-version-433330 kubelet[1393]: I1219 03:04:28.250645    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhs6h\" (UniqueName: \"kubernetes.io/projected/0fba7aca-106d-40c8-8651-91680e4fedcc-kube-api-access-jhs6h\") pod \"storage-provisioner\" (UID: \"0fba7aca-106d-40c8-8651-91680e4fedcc\") " pod="kube-system/storage-provisioner"
	Dec 19 03:04:28 old-k8s-version-433330 kubelet[1393]: I1219 03:04:28.351107    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fcc07be-0cde-4964-af90-fb09218728e6-config-volume\") pod \"coredns-5dd5756b68-vp79f\" (UID: \"9fcc07be-0cde-4964-af90-fb09218728e6\") " pod="kube-system/coredns-5dd5756b68-vp79f"
	Dec 19 03:04:28 old-k8s-version-433330 kubelet[1393]: I1219 03:04:28.351394    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2nf6\" (UniqueName: \"kubernetes.io/projected/9fcc07be-0cde-4964-af90-fb09218728e6-kube-api-access-g2nf6\") pod \"coredns-5dd5756b68-vp79f\" (UID: \"9fcc07be-0cde-4964-af90-fb09218728e6\") " pod="kube-system/coredns-5dd5756b68-vp79f"
	Dec 19 03:04:29 old-k8s-version-433330 kubelet[1393]: I1219 03:04:29.080811    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vp79f" podStartSLOduration=14.080751012 podCreationTimestamp="2025-12-19 03:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:04:29.08040652 +0000 UTC m=+26.217253380" watchObservedRunningTime="2025-12-19 03:04:29.080751012 +0000 UTC m=+26.217597870"
	Dec 19 03:04:31 old-k8s-version-433330 kubelet[1393]: I1219 03:04:31.013861    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.01380311 podCreationTimestamp="2025-12-19 03:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:04:29.105232709 +0000 UTC m=+26.242079566" watchObservedRunningTime="2025-12-19 03:04:31.01380311 +0000 UTC m=+28.150650028"
	Dec 19 03:04:31 old-k8s-version-433330 kubelet[1393]: I1219 03:04:31.014129    1393 topology_manager.go:215] "Topology Admit Handler" podUID="1b41a78a-e73b-4f8e-8857-c9e0e83de64f" podNamespace="default" podName="busybox"
	Dec 19 03:04:31 old-k8s-version-433330 kubelet[1393]: I1219 03:04:31.169441    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfml7\" (UniqueName: \"kubernetes.io/projected/1b41a78a-e73b-4f8e-8857-c9e0e83de64f-kube-api-access-lfml7\") pod \"busybox\" (UID: \"1b41a78a-e73b-4f8e-8857-c9e0e83de64f\") " pod="default/busybox"
	Dec 19 03:04:33 old-k8s-version-433330 kubelet[1393]: I1219 03:04:33.093300    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.696584995 podCreationTimestamp="2025-12-19 03:04:31 +0000 UTC" firstStartedPulling="2025-12-19 03:04:31.341383087 +0000 UTC m=+28.478229937" lastFinishedPulling="2025-12-19 03:04:32.738041838 +0000 UTC m=+29.874888689" observedRunningTime="2025-12-19 03:04:33.093153667 +0000 UTC m=+30.230000525" watchObservedRunningTime="2025-12-19 03:04:33.093243747 +0000 UTC m=+30.230090605"
	
	
	==> storage-provisioner [4ae902f4a38e80b42de3154584fe5fbec4064dbb372a4cdd8027a7183a572a89] <==
	I1219 03:04:28.556348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:04:28.567275       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:04:28.567473       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:04:28.577035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:04:28.577164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eca1d2cd-fec8-4561-9433-a93751f8f3f7", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-433330_b26b2171-6136-42e6-9cd2-21843d540310 became leader
	I1219 03:04:28.577321       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_b26b2171-6136-42e6-9cd2-21843d540310!
	I1219 03:04:28.678454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_b26b2171-6136-42e6-9cd2-21843d540310!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-433330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (296.152087ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:04:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-278042 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-278042 describe deploy/metrics-server -n kube-system: exit status 1 (83.774404ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-278042 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-278042
helpers_test.go:244: (dbg) docker inspect no-preload-278042:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	        "Created": "2025-12-19T03:03:43.244016686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314104,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:03:43.295946149Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hostname",
	        "HostsPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hosts",
	        "LogPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35-json.log",
	        "Name": "/no-preload-278042",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-278042:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-278042",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	                "LowerDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-278042",
	                "Source": "/var/lib/docker/volumes/no-preload-278042/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-278042",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-278042",
	                "name.minikube.sigs.k8s.io": "no-preload-278042",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8bf2c9823741489aa9a0a86648909b7f55da592376c63fcc410e08c7fa28b024",
	            "SandboxKey": "/var/run/docker/netns/8bf2c9823741",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-278042": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40e663ebb9c92fe8e9b5d1c06f073100d83df79efa76e295e52399b291babbbc",
	                    "EndpointID": "07bb7cd6d13111b7076e3fbd757138482422c336e97c57a23e6629ead0066c32",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "22:9a:b2:e5:58:dc",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-278042",
	                        "c49a965a7d8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-278042 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-278042 logs -n 25: (1.355975651s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-821749 sudo containerd config dump                                                                                                                             │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat cri-docker --no-pager                                                                                                        │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p calico-821749 sudo systemctl status crio --all --full --no-pager                                                                                                      │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ ssh     │ -p calico-821749 sudo systemctl cat crio --no-pager                                                                                                                      │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                             │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p calico-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cri-dockerd --version                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p calico-821749 sudo crio config                                                                                                                                        │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status containerd --all --full --no-pager                                                                                        │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p calico-821749                                                                                                                                                         │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat containerd --no-pager                                                                                                        │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /lib/systemd/system/containerd.service                                                                                                 │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /etc/containerd/config.toml                                                                                                            │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                                              │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                              │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                    │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                 │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                   │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                          │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3 │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:04:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:04:42.077353  332512 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:04:42.077618  332512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:42.077628  332512 out.go:374] Setting ErrFile to fd 2...
	I1219 03:04:42.077632  332512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:04:42.077978  332512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:04:42.078517  332512 out.go:368] Setting JSON to false
	I1219 03:04:42.080179  332512 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2833,"bootTime":1766110649,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:04:42.080275  332512 start.go:143] virtualization: kvm guest
	I1219 03:04:42.085848  332512 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:04:42.087314  332512 notify.go:221] Checking for updates...
	I1219 03:04:42.087332  332512 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:04:42.088979  332512 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:04:42.090607  332512 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:04:42.091907  332512 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:04:42.093336  332512 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:04:42.094461  332512 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:04:42.096281  332512 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:04:42.096421  332512 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:04:42.096532  332512 config.go:182] Loaded profile config "old-k8s-version-433330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1219 03:04:42.096653  332512 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:04:42.130958  332512 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:04:42.131119  332512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:42.235211  332512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-19 03:04:42.221484824 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:42.235365  332512 docker.go:319] overlay module found
	I1219 03:04:42.238120  332512 out.go:179] * Using the docker driver based on user configuration
	I1219 03:04:37.521157  330835 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1219 03:04:37.521481  330835 start.go:159] libmachine.API.Create for "embed-certs-805185" (driver="docker")
	I1219 03:04:37.521540  330835 client.go:173] LocalClient.Create starting
	I1219 03:04:37.521626  330835 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem
	I1219 03:04:37.521666  330835 main.go:144] libmachine: Decoding PEM data...
	I1219 03:04:37.521694  330835 main.go:144] libmachine: Parsing certificate...
	I1219 03:04:37.521824  330835 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem
	I1219 03:04:37.521859  330835 main.go:144] libmachine: Decoding PEM data...
	I1219 03:04:37.521877  330835 main.go:144] libmachine: Parsing certificate...
	I1219 03:04:37.522356  330835 cli_runner.go:164] Run: docker network inspect embed-certs-805185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 03:04:37.541982  330835 cli_runner.go:211] docker network inspect embed-certs-805185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 03:04:37.542062  330835 network_create.go:284] running [docker network inspect embed-certs-805185] to gather additional debugging logs...
	I1219 03:04:37.542085  330835 cli_runner.go:164] Run: docker network inspect embed-certs-805185
	W1219 03:04:37.561107  330835 cli_runner.go:211] docker network inspect embed-certs-805185 returned with exit code 1
	I1219 03:04:37.561136  330835 network_create.go:287] error running [docker network inspect embed-certs-805185]: docker network inspect embed-certs-805185: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-805185 not found
	I1219 03:04:37.561158  330835 network_create.go:289] output of [docker network inspect embed-certs-805185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-805185 not found
	
	** /stderr **
	I1219 03:04:37.561269  330835 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:04:37.582632  330835 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d70e62b79a31 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:cf:22:72:cb:a0} reservation:<nil>}
	I1219 03:04:37.583609  330835 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-980aea652065 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ba:dd:9c:97:fb:7d} reservation:<nil>}
	I1219 03:04:37.584717  330835 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42b42f6a5044 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:1e:31:1b:21:84} reservation:<nil>}
	I1219 03:04:37.585526  330835 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ebf807015d65 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:f2:b4:a5:63:ab:4c} reservation:<nil>}
	I1219 03:04:37.587847  330835 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016113d0}
	I1219 03:04:37.587883  330835 network_create.go:124] attempt to create docker network embed-certs-805185 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1219 03:04:37.588231  330835 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-805185 embed-certs-805185
	I1219 03:04:37.642831  330835 network_create.go:108] docker network embed-certs-805185 192.168.85.0/24 created
	I1219 03:04:37.642868  330835 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-805185" container
	I1219 03:04:37.642932  330835 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 03:04:37.661606  330835 cli_runner.go:164] Run: docker volume create embed-certs-805185 --label name.minikube.sigs.k8s.io=embed-certs-805185 --label created_by.minikube.sigs.k8s.io=true
	I1219 03:04:37.681506  330835 oci.go:103] Successfully created a docker volume embed-certs-805185
	I1219 03:04:37.681604  330835 cli_runner.go:164] Run: docker run --rm --name embed-certs-805185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-805185 --entrypoint /usr/bin/test -v embed-certs-805185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1219 03:04:38.129898  330835 oci.go:107] Successfully prepared a docker volume embed-certs-805185
	I1219 03:04:38.129955  330835 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:04:38.129966  330835 kic.go:194] Starting extracting preloaded images to volume ...
	I1219 03:04:38.130030  330835 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-805185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1219 03:04:41.537991  330835 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-805185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.407911619s)
	I1219 03:04:41.538024  330835 kic.go:203] duration metric: took 3.408053917s to extract preloaded images to volume ...
	W1219 03:04:41.538118  330835 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1219 03:04:41.538160  330835 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1219 03:04:41.538222  330835 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 03:04:41.611883  330835 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-805185 --name embed-certs-805185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-805185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-805185 --network embed-certs-805185 --ip 192.168.85.2 --volume embed-certs-805185:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1219 03:04:41.959172  330835 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Running}}
	I1219 03:04:41.984493  330835 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:04:42.007663  330835 cli_runner.go:164] Run: docker exec embed-certs-805185 stat /var/lib/dpkg/alternatives/iptables
	I1219 03:04:42.069851  330835 oci.go:144] the created container "embed-certs-805185" has a running status.
	I1219 03:04:42.069886  330835 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa...
	I1219 03:04:42.307001  330835 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 03:04:42.240674  332512 start.go:309] selected driver: docker
	I1219 03:04:42.240709  332512 start.go:928] validating driver "docker" against <nil>
	I1219 03:04:42.240726  332512 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:04:42.241612  332512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:04:42.341766  332512 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-19 03:04:42.327020037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:04:42.341989  332512 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 03:04:42.342332  332512 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:04:42.344274  332512 out.go:179] * Using Docker driver with root privileges
	I1219 03:04:42.345661  332512 cni.go:84] Creating CNI manager for ""
	I1219 03:04:42.345832  332512 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:04:42.345851  332512 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 03:04:42.345939  332512 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:04:42.348437  332512 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:04:42.350220  332512 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:04:42.351538  332512 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:04:42.355081  332512 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:04:42.355147  332512 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:04:42.355165  332512 cache.go:65] Caching tarball of preloaded images
	I1219 03:04:42.355233  332512 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:04:42.355309  332512 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:04:42.355324  332512 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:04:42.355515  332512 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:04:42.355553  332512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json: {Name:mk3017fca1963db84db9f74d2af5a1af6cd060e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:04:42.395678  332512 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:04:42.395868  332512 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:04:42.395900  332512 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:04:42.395960  332512 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:04:42.396133  332512 start.go:364] duration metric: took 148.037µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:04:42.396194  332512 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:04:42.396322  332512 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 19 03:04:30 no-preload-278042 crio[769]: time="2025-12-19T03:04:30.54789267Z" level=info msg="Starting container: 3f1549cfc91ad050e1a12aecbf835e76abff0f2c258d6262870dbc30e3276208" id=13010484-57b3-4ad6-bf5d-1b4691c6bb04 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:04:30 no-preload-278042 crio[769]: time="2025-12-19T03:04:30.550884662Z" level=info msg="Started container" PID=2818 containerID=3f1549cfc91ad050e1a12aecbf835e76abff0f2c258d6262870dbc30e3276208 description=kube-system/coredns-7d764666f9-vj7lm/coredns id=13010484-57b3-4ad6-bf5d-1b4691c6bb04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=43ad18e3b91d77e0e875474f8853060b2960a087bf04eb8beed8fcc2c8e502e4
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.639244548Z" level=info msg="Running pod sandbox: default/busybox/POD" id=055d284a-40a0-4e62-b0ce-aa2d752377ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.639349434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.645825735Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b26594e6f9ea9d0880f800c899115989cce57cbaf43b0cb051a7e09665a69770 UID:63c824bf-6272-44c8-8874-48b3d0245b2f NetNS:/var/run/netns/98f5a11b-5ba4-4c6a-81ac-4171b64ccc18 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b400}] Aliases:map[]}"
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.645869056Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.657994948Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b26594e6f9ea9d0880f800c899115989cce57cbaf43b0cb051a7e09665a69770 UID:63c824bf-6272-44c8-8874-48b3d0245b2f NetNS:/var/run/netns/98f5a11b-5ba4-4c6a-81ac-4171b64ccc18 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b400}] Aliases:map[]}"
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.658169604Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.659243119Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.660204457Z" level=info msg="Ran pod sandbox b26594e6f9ea9d0880f800c899115989cce57cbaf43b0cb051a7e09665a69770 with infra container: default/busybox/POD" id=055d284a-40a0-4e62-b0ce-aa2d752377ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.66162471Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f31768cf-f22a-432c-9290-ef137c1445b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.661837989Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f31768cf-f22a-432c-9290-ef137c1445b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.661875089Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f31768cf-f22a-432c-9290-ef137c1445b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.662738167Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a99148eb-782a-4053-91e7-3ff0e985997c name=/runtime.v1.ImageService/PullImage
	Dec 19 03:04:33 no-preload-278042 crio[769]: time="2025-12-19T03:04:33.664266781Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.058821059Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a99148eb-782a-4053-91e7-3ff0e985997c name=/runtime.v1.ImageService/PullImage
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.059427731Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=adb71103-0d4a-4b6a-8306-f59b8e38ec92 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.061171679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ed5d0285-87cb-4f89-adc8-66cd06c7eabd name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.064793991Z" level=info msg="Creating container: default/busybox/busybox" id=0aa80dc5-07d9-4c35-9df2-62a23b505805 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.064903605Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.068932809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.069343883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.103214192Z" level=info msg="Created container 43c777ad6e2ea840e400c576db5d742a36b9ff460456583d47633546ca087f13: default/busybox/busybox" id=0aa80dc5-07d9-4c35-9df2-62a23b505805 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.103773031Z" level=info msg="Starting container: 43c777ad6e2ea840e400c576db5d742a36b9ff460456583d47633546ca087f13" id=7338c845-5c8f-4998-ab9c-5955cf00b417 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:04:35 no-preload-278042 crio[769]: time="2025-12-19T03:04:35.105454525Z" level=info msg="Started container" PID=2890 containerID=43c777ad6e2ea840e400c576db5d742a36b9ff460456583d47633546ca087f13 description=default/busybox/busybox id=7338c845-5c8f-4998-ab9c-5955cf00b417 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b26594e6f9ea9d0880f800c899115989cce57cbaf43b0cb051a7e09665a69770
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	43c777ad6e2ea       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   b26594e6f9ea9       busybox                                     default
	3f1549cfc91ad       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   43ad18e3b91d7       coredns-7d764666f9-vj7lm                    kube-system
	d1988c2a17516       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   d95f48beac35b       storage-provisioner                         kube-system
	f80c44d10f9e6       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   9b6290e90596d       kindnet-xrp2s                               kube-system
	20a1575fb3957       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      25 seconds ago      Running             kube-proxy                0                   57381b8ee7b9b       kube-proxy-g2gm4                            kube-system
	d812fab0c3962       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      35 seconds ago      Running             kube-controller-manager   0                   aca774b7b6847       kube-controller-manager-no-preload-278042   kube-system
	0199947779b16       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      35 seconds ago      Running             kube-scheduler            0                   58cb7d45aeaac       kube-scheduler-no-preload-278042            kube-system
	13d5db15b9cb0       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      35 seconds ago      Running             kube-apiserver            0                   c8472188123dd       kube-apiserver-no-preload-278042            kube-system
	8ff908e95b9e9       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   1fd687c8bb511       etcd-no-preload-278042                      kube-system
	
	
	==> coredns [3f1549cfc91ad050e1a12aecbf835e76abff0f2c258d6262870dbc30e3276208] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34431 - 39616 "HINFO IN 3350393017169462642.8442482271561188258. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022687485s
	
	
	==> describe nodes <==
	Name:               no-preload-278042
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-278042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-278042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-278042
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:04:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:04:42 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:04:42 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:04:42 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:04:42 +0000   Fri, 19 Dec 2025 03:04:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-278042
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                8fbc19b8-72f7-4938-83d9-fc3015dde7d1
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-vj7lm                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-278042                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-xrp2s                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-278042             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-278042    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-g2gm4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-278042             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [8ff908e95b9e9db22f688ee866693b37920d55b9200becd323cd505ef6951ba5] <==
	{"level":"info","ts":"2025-12-19T03:04:08.526768Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-19T03:04:08.526860Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-19T03:04:08.526906Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-19T03:04:08.526920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:04:08.526933Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:08.583952Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:08.584000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:04:08.584021Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:08.584034Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-19T03:04:08.585426Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-278042 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:04:08.585441Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:08.585436Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:04:08.585471Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:04:08.585727Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:08.585754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:04:08.586853Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:04:08.588696Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:04:08.590337Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:04:08.591387Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-19T03:04:08.651153Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:04:08.659419Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:04:08.659726Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-19T03:04:08.659849Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-19T03:04:08.659958Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:04:11.228137Z","caller":"traceutil/trace.go:172","msg":"trace[684528789] transaction","detail":"{read_only:false; response_revision:157; number_of_response:1; }","duration":"102.255903ms","start":"2025-12-19T03:04:11.125856Z","end":"2025-12-19T03:04:11.228112Z","steps":["trace[684528789] 'process raft request'  (duration: 60.087739ms)","trace[684528789] 'compare'  (duration: 42.017265ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:04:43 up 47 min,  0 user,  load average: 8.22, 4.30, 2.60
	Linux no-preload-278042 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f80c44d10f9e67996c4f989847683ec7fb4a4d797112e25cbda2e8650a6421e8] <==
	I1219 03:04:19.671941       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:04:19.672219       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1219 03:04:19.672365       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:04:19.672383       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:04:19.672400       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:04:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:04:19.943287       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:04:19.943338       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:04:19.943351       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:04:19.943495       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:04:20.443776       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:04:20.443815       1 metrics.go:72] Registering metrics
	I1219 03:04:20.444020       1 controller.go:711] "Syncing nftables rules"
	I1219 03:04:29.881065       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:04:29.881131       1 main.go:301] handling current node
	I1219 03:04:39.876460       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:04:39.876513       1 main.go:301] handling current node
	
	
	==> kube-apiserver [13d5db15b9cb023c31bd4527a3e0d0a1188d8417585899f340d6d22935d91b99] <==
	E1219 03:04:09.681665       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1219 03:04:09.723303       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1219 03:04:09.768249       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:04:09.774279       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:09.774401       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1219 03:04:09.781234       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:09.886977       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1219 03:04:10.570787       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1219 03:04:10.574212       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1219 03:04:10.574286       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1219 03:04:11.348063       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:04:11.380620       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:04:11.476331       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1219 03:04:11.481913       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1219 03:04:11.482999       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:04:11.487475       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:04:11.628835       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 03:04:12.335091       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:04:12.343655       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:04:12.351385       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:04:17.281951       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:17.287231       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:17.429041       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:04:17.579664       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:04:41.444647       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:44790: use of closed network connection
	
	
	==> kube-controller-manager [d812fab0c3962b380c491759af7f74521a69d09562a31347dc712bf8cb742679] <==
	I1219 03:04:16.460940       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.461263       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.467239       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.467260       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.467245       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.467291       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.467339       1 range_allocator.go:177] "Sending events to api server"
	I1219 03:04:16.467382       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1219 03:04:16.467387       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:16.467392       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.467752       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.467765       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.467912       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.468718       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.468722       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.468821       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:04:16.468827       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:04:16.469085       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.470263       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:16.470378       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1219 03:04:16.470587       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-278042"
	I1219 03:04:16.470685       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1219 03:04:16.477009       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-278042" podCIDRs=["10.244.0.0/24"]
	I1219 03:04:16.542737       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:31.472687       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [20a1575fb39576b6d1376cb4b7419d923be7e21c52c5b39b9ef24e18b2232157] <==
	I1219 03:04:18.088682       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:04:18.162146       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:04:18.262631       1 shared_informer.go:377] "Caches are synced"
	I1219 03:04:18.262671       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1219 03:04:18.262823       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:04:18.290421       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:04:18.290483       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:04:18.297913       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:04:18.298366       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:04:18.298430       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:04:18.299834       1 config.go:200] "Starting service config controller"
	I1219 03:04:18.299909       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:04:18.299920       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:04:18.299932       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:04:18.300000       1 config.go:309] "Starting node config controller"
	I1219 03:04:18.300006       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:04:18.300012       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:04:18.300210       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:04:18.300218       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:04:18.401035       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:04:18.401162       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:04:18.401918       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0199947779b1676f1ff35977366518ebb854d1900864a2cc72761c91d0f747b1] <==
	E1219 03:04:09.644184       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:04:09.644257       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 03:04:09.644325       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 03:04:09.643406       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 03:04:09.645185       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 03:04:09.645322       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 03:04:09.645405       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 03:04:09.645450       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:04:09.645472       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 03:04:09.645488       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:04:09.645505       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 03:04:09.645590       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 03:04:09.645853       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 03:04:09.645955       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:04:10.475771       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 03:04:10.511685       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 03:04:10.519982       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:04:10.647929       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1219 03:04:10.705216       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:04:10.738695       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:04:10.764825       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 03:04:10.790192       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 03:04:10.796190       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 03:04:10.834324       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	I1219 03:04:13.033781       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:04:17 no-preload-278042 kubelet[2204]: I1219 03:04:17.619370    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lckv5\" (UniqueName: \"kubernetes.io/projected/b0f7317a-c504-4597-ba97-3d50ee2927c1-kube-api-access-lckv5\") pod \"kindnet-xrp2s\" (UID: \"b0f7317a-c504-4597-ba97-3d50ee2927c1\") " pod="kube-system/kindnet-xrp2s"
	Dec 19 03:04:17 no-preload-278042 kubelet[2204]: I1219 03:04:17.619399    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cb3af28-e9b4-45b6-80d4-fe8bdadd6911-xtables-lock\") pod \"kube-proxy-g2gm4\" (UID: \"4cb3af28-e9b4-45b6-80d4-fe8bdadd6911\") " pod="kube-system/kube-proxy-g2gm4"
	Dec 19 03:04:17 no-preload-278042 kubelet[2204]: I1219 03:04:17.619423    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr26h\" (UniqueName: \"kubernetes.io/projected/4cb3af28-e9b4-45b6-80d4-fe8bdadd6911-kube-api-access-xr26h\") pod \"kube-proxy-g2gm4\" (UID: \"4cb3af28-e9b4-45b6-80d4-fe8bdadd6911\") " pod="kube-system/kube-proxy-g2gm4"
	Dec 19 03:04:17 no-preload-278042 kubelet[2204]: I1219 03:04:17.619448    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b0f7317a-c504-4597-ba97-3d50ee2927c1-cni-cfg\") pod \"kindnet-xrp2s\" (UID: \"b0f7317a-c504-4597-ba97-3d50ee2927c1\") " pod="kube-system/kindnet-xrp2s"
	Dec 19 03:04:18 no-preload-278042 kubelet[2204]: I1219 03:04:18.242280    2204 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-g2gm4" podStartSLOduration=1.242263461 podStartE2EDuration="1.242263461s" podCreationTimestamp="2025-12-19 03:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:04:18.241909747 +0000 UTC m=+6.143684272" watchObservedRunningTime="2025-12-19 03:04:18.242263461 +0000 UTC m=+6.144037986"
	Dec 19 03:04:20 no-preload-278042 kubelet[2204]: E1219 03:04:20.087910    2204 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:04:20 no-preload-278042 kubelet[2204]: I1219 03:04:20.242154    2204 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-xrp2s" podStartSLOduration=1.7269579259999999 podStartE2EDuration="3.242134727s" podCreationTimestamp="2025-12-19 03:04:17 +0000 UTC" firstStartedPulling="2025-12-19 03:04:17.915263949 +0000 UTC m=+5.817038453" lastFinishedPulling="2025-12-19 03:04:19.430440763 +0000 UTC m=+7.332215254" observedRunningTime="2025-12-19 03:04:20.242007848 +0000 UTC m=+8.143782360" watchObservedRunningTime="2025-12-19 03:04:20.242134727 +0000 UTC m=+8.143909246"
	Dec 19 03:04:20 no-preload-278042 kubelet[2204]: E1219 03:04:20.695893    2204 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:04:20 no-preload-278042 kubelet[2204]: E1219 03:04:20.971736    2204 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:04:24 no-preload-278042 kubelet[2204]: E1219 03:04:24.696543    2204 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:04:30 no-preload-278042 kubelet[2204]: E1219 03:04:30.094845    2204 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:04:30 no-preload-278042 kubelet[2204]: I1219 03:04:30.130790    2204 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 19 03:04:30 no-preload-278042 kubelet[2204]: I1219 03:04:30.209113    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfqpt\" (UniqueName: \"kubernetes.io/projected/6bb897eb-e856-4660-aa9c-3fac6b610d38-kube-api-access-wfqpt\") pod \"coredns-7d764666f9-vj7lm\" (UID: \"6bb897eb-e856-4660-aa9c-3fac6b610d38\") " pod="kube-system/coredns-7d764666f9-vj7lm"
	Dec 19 03:04:30 no-preload-278042 kubelet[2204]: I1219 03:04:30.209173    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7114449c-463d-44ef-955c-5dda46333a32-tmp\") pod \"storage-provisioner\" (UID: \"7114449c-463d-44ef-955c-5dda46333a32\") " pod="kube-system/storage-provisioner"
	Dec 19 03:04:30 no-preload-278042 kubelet[2204]: I1219 03:04:30.209290    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvnl\" (UniqueName: \"kubernetes.io/projected/7114449c-463d-44ef-955c-5dda46333a32-kube-api-access-fhvnl\") pod \"storage-provisioner\" (UID: \"7114449c-463d-44ef-955c-5dda46333a32\") " pod="kube-system/storage-provisioner"
	Dec 19 03:04:30 no-preload-278042 kubelet[2204]: I1219 03:04:30.209352    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bb897eb-e856-4660-aa9c-3fac6b610d38-config-volume\") pod \"coredns-7d764666f9-vj7lm\" (UID: \"6bb897eb-e856-4660-aa9c-3fac6b610d38\") " pod="kube-system/coredns-7d764666f9-vj7lm"
	Dec 19 03:04:30 no-preload-278042 kubelet[2204]: E1219 03:04:30.697126    2204 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:04:30 no-preload-278042 kubelet[2204]: E1219 03:04:30.977228    2204 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:04:31 no-preload-278042 kubelet[2204]: E1219 03:04:31.260594    2204 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:04:31 no-preload-278042 kubelet[2204]: I1219 03:04:31.269067    2204 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.269048263 podStartE2EDuration="13.269048263s" podCreationTimestamp="2025-12-19 03:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:04:31.268764957 +0000 UTC m=+19.170539469" watchObservedRunningTime="2025-12-19 03:04:31.269048263 +0000 UTC m=+19.170822775"
	Dec 19 03:04:31 no-preload-278042 kubelet[2204]: I1219 03:04:31.285288    2204 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-vj7lm" podStartSLOduration=14.285272331 podStartE2EDuration="14.285272331s" podCreationTimestamp="2025-12-19 03:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:04:31.285103767 +0000 UTC m=+19.186878279" watchObservedRunningTime="2025-12-19 03:04:31.285272331 +0000 UTC m=+19.187046853"
	Dec 19 03:04:32 no-preload-278042 kubelet[2204]: E1219 03:04:32.263212    2204 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:04:33 no-preload-278042 kubelet[2204]: E1219 03:04:33.266015    2204 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:04:33 no-preload-278042 kubelet[2204]: I1219 03:04:33.431677    2204 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn5js\" (UniqueName: \"kubernetes.io/projected/63c824bf-6272-44c8-8874-48b3d0245b2f-kube-api-access-xn5js\") pod \"busybox\" (UID: \"63c824bf-6272-44c8-8874-48b3d0245b2f\") " pod="default/busybox"
	Dec 19 03:04:35 no-preload-278042 kubelet[2204]: I1219 03:04:35.284141    2204 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.886128765 podStartE2EDuration="2.284124885s" podCreationTimestamp="2025-12-19 03:04:33 +0000 UTC" firstStartedPulling="2025-12-19 03:04:33.662342106 +0000 UTC m=+21.564116618" lastFinishedPulling="2025-12-19 03:04:35.060338227 +0000 UTC m=+22.962112738" observedRunningTime="2025-12-19 03:04:35.283928242 +0000 UTC m=+23.185702754" watchObservedRunningTime="2025-12-19 03:04:35.284124885 +0000 UTC m=+23.185899413"
	
	
	==> storage-provisioner [d1988c2a1751690ae9dd755f11d69b87b27c88468b35e70619fb36319a0f84a6] <==
	I1219 03:04:30.544574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:04:30.555062       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:04:30.555108       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 03:04:30.557602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:30.563746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:04:30.563902       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:04:30.564060       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-278042_8a7eacf8-181a-4703-90af-23124c358e25!
	I1219 03:04:30.564134       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cb0d6829-3521-49f2-97ec-2bd38bb4da43", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-278042_8a7eacf8-181a-4703-90af-23124c358e25 became leader
	W1219 03:04:30.566205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:30.570131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:04:30.665137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-278042_8a7eacf8-181a-4703-90af-23124c358e25!
	W1219 03:04:32.573915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:32.578901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:34.582813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:34.588500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:36.592756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:36.598052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:38.602004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:38.619988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:40.623389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:40.678197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:42.684678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:04:42.692878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-278042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.71s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (315.549123ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:05:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-805185 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-805185 describe deploy/metrics-server -n kube-system: exit status 1 (77.727028ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-805185 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
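The stderr captured above points at the root cause rather than at the addon itself: before enabling an addon, minikube checks that the node is not paused by running "sudo runc list -f json" inside the node, and that command exits 1 here because /run/runc does not exist (possibly because this cri-o image uses crun or a non-default runtime state root; that is an inference, not something this report confirms). A minimal sketch of reproducing the check by hand, assuming the profile name from this run and that the cluster is still up:

	# Re-run the exact command the paused-state check uses (taken from the error message above).
	out/minikube-linux-amd64 ssh -p embed-certs-805185 -- sudo runc list -f json
	# See which OCI runtime state directories actually exist inside the node; the /run/crun
	# path is only an illustrative guess at where the state might live instead.
	out/minikube-linux-amd64 ssh -p embed-certs-805185 -- ls -d /run/runc /run/crun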
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-805185
helpers_test.go:244: (dbg) docker inspect embed-certs-805185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	        "Created": "2025-12-19T03:04:41.634228453Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332148,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:41.680035119Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hosts",
	        "LogPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415-json.log",
	        "Name": "/embed-certs-805185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-805185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-805185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	                "LowerDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-805185",
	                "Source": "/var/lib/docker/volumes/embed-certs-805185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-805185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-805185",
	                "name.minikube.sigs.k8s.io": "embed-certs-805185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1857eae4c5de526cd4caec99805b210bfc3972102eb91cd8255373b1ef0ff9c2",
	            "SandboxKey": "/var/run/docker/netns/1857eae4c5de",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-805185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67670b4143fc2c858529db8e9ece90091b3a7a00c5465943bbbbea83d055a550",
	                    "EndpointID": "ab1f07316cbc0fc5ae8db6c30073c414cefb635be84f84fb927974d6215f2449",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "92:68:7e:d2:e8:04",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-805185",
	                        "c2b5f77a65ce"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
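For anyone reproducing this post-mortem locally, the full dump above is not required: docker inspect accepts a Go template via -f/--format, so the network-settings portion can be pulled on its own (a minimal sketch; the container name is taken from the inspect output above and the same Docker Engine CLI as this run is assumed):

	docker inspect -f '{{json .NetworkSettings.Networks}}' embed-certs-805185
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-805185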
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25: (1.197212364s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-821749 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p calico-821749 sudo crio config                                                                                                                                                                                                             │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p calico-821749                                                                                                                                                                                                                              │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:00.811825  338816 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:00.812140  338816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:00.812154  338816 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:00.812160  338816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:00.812448  338816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:00.813127  338816 out.go:368] Setting JSON to false
	I1219 03:05:00.814745  338816 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2852,"bootTime":1766110649,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:00.814830  338816 start.go:143] virtualization: kvm guest
	I1219 03:05:00.816988  338816 out.go:179] * [no-preload-278042] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:00.818543  338816 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:00.818545  338816 notify.go:221] Checking for updates...
	I1219 03:05:00.819816  338816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:00.821563  338816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:00.822827  338816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:00.824185  338816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:00.825466  338816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:00.831448  338816 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:05:00.832147  338816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:00.862204  338816 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:00.862370  338816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:00.934433  338816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-19 03:05:00.923077867 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:00.934586  338816 docker.go:319] overlay module found
	I1219 03:05:00.937154  338816 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:00.938308  338816 start.go:309] selected driver: docker
	I1219 03:05:00.938324  338816 start.go:928] validating driver "docker" against &{Name:no-preload-278042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:00.938428  338816 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:00.939209  338816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:00.997378  338816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-19 03:05:00.987371827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:00.997667  338816 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:00.997838  338816 cni.go:84] Creating CNI manager for ""
	I1219 03:05:00.997920  338816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:00.997990  338816 start.go:353] cluster config:
	{Name:no-preload-278042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:00.999840  338816 out.go:179] * Starting "no-preload-278042" primary control-plane node in "no-preload-278042" cluster
	I1219 03:05:01.000906  338816 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:01.002150  338816 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:01.003607  338816 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:05:01.003688  338816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:01.003736  338816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/config.json ...
	I1219 03:05:01.003846  338816 cache.go:107] acquiring lock: {Name:mk4042534b37a078c50eadc76c3b72fca5a085d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.003887  338816 cache.go:107] acquiring lock: {Name:mke7e35bfc025797e6268aab9dc90b26c2336a31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.003922  338816 cache.go:107] acquiring lock: {Name:mka478e3dbccb645e168d75bf94249d243927311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.003971  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1219 03:05:01.003982  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1219 03:05:01.003988  338816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 155.716µs
	I1219 03:05:01.004001  338816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1219 03:05:01.003992  338816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 76.157µs
	I1219 03:05:01.003978  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1219 03:05:01.003971  338816 cache.go:107] acquiring lock: {Name:mk10e329e7d25506a0a2796935772d4b44680659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004021  338816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 145.653µs
	I1219 03:05:01.003992  338816 cache.go:107] acquiring lock: {Name:mk5e69a6045de9e68e630d6b1379ad8de497ff66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004032  338816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1219 03:05:01.004011  338816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1219 03:05:01.004032  338816 cache.go:107] acquiring lock: {Name:mk78bf1c270ef16831276830937a4a078010d84e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004065  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1219 03:05:01.004067  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1219 03:05:01.004070  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1219 03:05:01.004075  338816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 124.696µs
	I1219 03:05:01.004090  338816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1219 03:05:01.004071  338816 cache.go:107] acquiring lock: {Name:mkc144e93b6fbc13859f0863c101486208e08799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004101  338816 cache.go:107] acquiring lock: {Name:mkf0879b2828224e5a74fc93c99778fb85bb6a55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004184  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1219 03:05:01.004210  338816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 197.6µs
	I1219 03:05:01.004235  338816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1219 03:05:01.004077  338816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 87.539µs
	I1219 03:05:01.004251  338816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1219 03:05:01.004078  338816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 48.527µs
	I1219 03:05:01.004267  338816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1219 03:05:01.004267  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1219 03:05:01.004279  338816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 228.447µs
	I1219 03:05:01.004301  338816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1219 03:05:01.004316  338816 cache.go:87] Successfully saved all images to host disk.
	I1219 03:05:01.024229  338816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:01.024248  338816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:01.024265  338816 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:01.024298  338816 start.go:360] acquireMachinesLock for no-preload-278042: {Name:mk30a7004ec2e933aa2c562456de05f69cb301f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.024357  338816 start.go:364] duration metric: took 39.381µs to acquireMachinesLock for "no-preload-278042"
	I1219 03:05:01.024380  338816 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:01.024388  338816 fix.go:54] fixHost starting: 
	I1219 03:05:01.024583  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:01.041773  338816 fix.go:112] recreateIfNeeded on no-preload-278042: state=Stopped err=<nil>
	W1219 03:05:01.041804  338816 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:04:57.915278  330835 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:04:57.920246  330835 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1219 03:04:57.920265  330835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:04:57.933366  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1219 03:04:58.189220  330835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:04:58.189418  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:04:58.189493  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-805185 minikube.k8s.io/updated_at=2025_12_19T03_04_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=embed-certs-805185 minikube.k8s.io/primary=true
	I1219 03:04:58.204490  330835 ops.go:34] apiserver oom_adj: -16
	I1219 03:04:58.298969  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:04:58.800036  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:04:59.299490  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:04:59.799880  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:00.299637  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:00.799931  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:01.299100  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:01.799101  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:02.299749  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:02.799440  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:02.869115  330835 kubeadm.go:1114] duration metric: took 4.679815393s to wait for elevateKubeSystemPrivileges
	I1219 03:05:02.869148  330835 kubeadm.go:403] duration metric: took 14.349960475s to StartCluster
	I1219 03:05:02.869180  330835 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:02.869249  330835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:02.870257  330835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:02.870499  330835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:05:02.870553  330835 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:05:02.870619  330835 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:05:02.870730  330835 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-805185"
	I1219 03:05:02.870760  330835 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-805185"
	I1219 03:05:02.870772  330835 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:02.870782  330835 addons.go:70] Setting default-storageclass=true in profile "embed-certs-805185"
	I1219 03:05:02.870791  330835 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:02.870806  330835 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-805185"
	I1219 03:05:02.871180  330835 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:02.871327  330835 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:02.873715  330835 out.go:179] * Verifying Kubernetes components...
	I1219 03:05:02.874932  330835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:02.895189  330835 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	I1219 03:05:02.895251  330835 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:02.895421  330835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:05:03.329138  332512 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1219 03:05:03.329210  332512 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 03:05:03.329320  332512 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 03:05:03.329406  332512 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 03:05:03.329461  332512 kubeadm.go:319] OS: Linux
	I1219 03:05:03.329527  332512 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 03:05:03.329596  332512 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 03:05:03.329668  332512 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 03:05:03.329741  332512 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 03:05:03.329814  332512 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 03:05:03.329878  332512 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 03:05:03.329940  332512 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 03:05:03.330004  332512 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 03:05:03.330095  332512 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 03:05:03.330220  332512 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 03:05:03.330337  332512 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 03:05:03.330416  332512 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 03:05:03.332609  332512 out.go:252]   - Generating certificates and keys ...
	I1219 03:05:03.332745  332512 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 03:05:03.332839  332512 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 03:05:03.332918  332512 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 03:05:03.333010  332512 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 03:05:03.333086  332512 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 03:05:03.333154  332512 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 03:05:03.333225  332512 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 03:05:03.333420  332512 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-717222 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1219 03:05:03.333526  332512 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 03:05:03.333723  332512 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-717222 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1219 03:05:03.333830  332512 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 03:05:03.333912  332512 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 03:05:03.333972  332512 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 03:05:03.334049  332512 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 03:05:03.334105  332512 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 03:05:03.334170  332512 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 03:05:03.334233  332512 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 03:05:03.334319  332512 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 03:05:03.334383  332512 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 03:05:03.334471  332512 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 03:05:03.334546  332512 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 03:05:03.336445  332512 out.go:252]   - Booting up control plane ...
	I1219 03:05:03.336544  332512 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 03:05:03.336637  332512 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 03:05:03.336726  332512 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 03:05:03.336836  332512 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 03:05:03.336933  332512 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 03:05:03.337051  332512 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 03:05:03.337160  332512 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 03:05:03.337210  332512 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 03:05:03.337389  332512 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 03:05:03.337514  332512 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 03:05:03.337589  332512 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001193265s
	I1219 03:05:03.337722  332512 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 03:05:03.337824  332512 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1219 03:05:03.337923  332512 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 03:05:03.338017  332512 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 03:05:03.338094  332512 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.923604324s
	I1219 03:05:03.338162  332512 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.718802088s
	I1219 03:05:03.338231  332512 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501648314s
	I1219 03:05:03.338348  332512 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 03:05:03.338487  332512 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 03:05:03.338561  332512 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 03:05:03.338818  332512 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-717222 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 03:05:03.338881  332512 kubeadm.go:319] [bootstrap-token] Using token: 777u07.wfsafuhj1sljp45a
	I1219 03:05:02.895778  330835 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:02.896665  330835 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:02.896684  330835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:02.896783  330835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:02.927058  330835 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:02.927082  330835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:02.927138  330835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:02.929643  330835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:02.952985  330835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:02.976888  330835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:05:03.018033  330835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:03.053612  330835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:03.071120  330835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:03.143029  330835 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1219 03:05:03.144016  330835 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:03.341793  332512 out.go:252]   - Configuring RBAC rules ...
	I1219 03:05:03.341945  332512 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 03:05:03.342060  332512 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 03:05:03.342250  332512 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 03:05:03.342430  332512 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 03:05:03.342593  332512 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 03:05:03.342722  332512 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 03:05:03.342910  332512 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 03:05:03.342981  332512 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 03:05:03.343054  332512 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 03:05:03.343064  332512 kubeadm.go:319] 
	I1219 03:05:03.343150  332512 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 03:05:03.343161  332512 kubeadm.go:319] 
	I1219 03:05:03.343284  332512 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 03:05:03.343298  332512 kubeadm.go:319] 
	I1219 03:05:03.343319  332512 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 03:05:03.343390  332512 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 03:05:03.343482  332512 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 03:05:03.343500  332512 kubeadm.go:319] 
	I1219 03:05:03.343575  332512 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 03:05:03.343583  332512 kubeadm.go:319] 
	I1219 03:05:03.343645  332512 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 03:05:03.343655  332512 kubeadm.go:319] 
	I1219 03:05:03.343746  332512 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 03:05:03.343839  332512 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 03:05:03.343939  332512 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 03:05:03.343945  332512 kubeadm.go:319] 
	I1219 03:05:03.344050  332512 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 03:05:03.344146  332512 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 03:05:03.344151  332512 kubeadm.go:319] 
	I1219 03:05:03.344251  332512 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 777u07.wfsafuhj1sljp45a \
	I1219 03:05:03.344378  332512 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 03:05:03.344403  332512 kubeadm.go:319] 	--control-plane 
	I1219 03:05:03.344408  332512 kubeadm.go:319] 
	I1219 03:05:03.344513  332512 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 03:05:03.344517  332512 kubeadm.go:319] 
	I1219 03:05:03.344621  332512 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 777u07.wfsafuhj1sljp45a \
	I1219 03:05:03.344776  332512 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
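The join commands kubeadm prints above embed a bootstrap token and the SHA-256 hash of the cluster CA public key. Should that hash need to be recomputed later, the usual recipe is roughly the following, assuming an RSA CA key (kubeadm's default); on a minikube node the CA sits at /var/lib/minikube/certs/ca.crt rather than the stock /etc/kubernetes/pki path:

    # derive the --discovery-token-ca-cert-hash value from the CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'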
	I1219 03:05:03.344787  332512 cni.go:84] Creating CNI manager for ""
	I1219 03:05:03.344797  332512 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:03.345546  330835 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:05:03.346443  332512 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 03:05:00.102910  338195 out.go:252] * Restarting existing docker container for "old-k8s-version-433330" ...
	I1219 03:05:00.103221  338195 cli_runner.go:164] Run: docker start old-k8s-version-433330
	I1219 03:05:00.531550  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:00.554813  338195 kic.go:430] container "old-k8s-version-433330" state is running.
	I1219 03:05:00.555252  338195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-433330
	I1219 03:05:00.578791  338195 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/config.json ...
	I1219 03:05:00.579080  338195 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:00.579155  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:00.602239  338195 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:00.602473  338195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1219 03:05:00.602484  338195 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:00.603264  338195 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46560->127.0.0.1:33118: read: connection reset by peer
	I1219 03:05:03.773374  338195 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-433330
	
	I1219 03:05:03.773404  338195 ubuntu.go:182] provisioning hostname "old-k8s-version-433330"
	I1219 03:05:03.773469  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:03.800136  338195 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:03.800480  338195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1219 03:05:03.800506  338195 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-433330 && echo "old-k8s-version-433330" | sudo tee /etc/hostname
	I1219 03:05:03.980642  338195 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-433330
	
	I1219 03:05:03.980825  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:04.012377  338195 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:04.012723  338195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1219 03:05:04.012753  338195 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-433330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-433330/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-433330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:04.173876  338195 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:04.173906  338195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:04.173949  338195 ubuntu.go:190] setting up certificates
	I1219 03:05:04.173969  338195 provision.go:84] configureAuth start
	I1219 03:05:04.174050  338195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-433330
	I1219 03:05:04.196880  338195 provision.go:143] copyHostCerts
	I1219 03:05:04.196933  338195 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:04.196946  338195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:04.197007  338195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:04.197144  338195 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:04.197157  338195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:04.197197  338195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:04.197309  338195 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:04.197322  338195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:04.197364  338195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:04.197512  338195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-433330 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-433330]
	I1219 03:05:04.292658  338195 provision.go:177] copyRemoteCerts
	I1219 03:05:04.292750  338195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:04.292821  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:04.317808  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:04.433528  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:04.458555  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1219 03:05:04.482338  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:05:04.503353  338195 provision.go:87] duration metric: took 329.364483ms to configureAuth
	I1219 03:05:04.503391  338195 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:04.503595  338195 config.go:182] Loaded profile config "old-k8s-version-433330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1219 03:05:04.503785  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:04.524501  338195 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:04.524833  338195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1219 03:05:04.524860  338195 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:01.043743  338816 out.go:252] * Restarting existing docker container for "no-preload-278042" ...
	I1219 03:05:01.043821  338816 cli_runner.go:164] Run: docker start no-preload-278042
	I1219 03:05:01.342435  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:01.366361  338816 kic.go:430] container "no-preload-278042" state is running.
	I1219 03:05:01.366966  338816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-278042
	I1219 03:05:01.391891  338816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/config.json ...
	I1219 03:05:01.392171  338816 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:01.392261  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:01.415534  338816 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:01.415884  338816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1219 03:05:01.415902  338816 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:01.416658  338816 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33222->127.0.0.1:33123: read: connection reset by peer
	I1219 03:05:04.580995  338816 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-278042
	
	I1219 03:05:04.581030  338816 ubuntu.go:182] provisioning hostname "no-preload-278042"
	I1219 03:05:04.581099  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:04.602209  338816 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:04.602494  338816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1219 03:05:04.602513  338816 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-278042 && echo "no-preload-278042" | sudo tee /etc/hostname
	I1219 03:05:04.764912  338816 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-278042
	
	I1219 03:05:04.765012  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:04.786295  338816 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:04.786596  338816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1219 03:05:04.786621  338816 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-278042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-278042/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-278042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:04.936767  338816 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:04.936797  338816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:04.936824  338816 ubuntu.go:190] setting up certificates
	I1219 03:05:04.936837  338816 provision.go:84] configureAuth start
	I1219 03:05:04.936888  338816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-278042
	I1219 03:05:04.957845  338816 provision.go:143] copyHostCerts
	I1219 03:05:04.957920  338816 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:04.957937  338816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:04.958000  338816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:04.958096  338816 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:04.958106  338816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:04.958134  338816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:04.958186  338816 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:04.958194  338816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:04.958218  338816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:04.958274  338816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.no-preload-278042 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-278042]
	I1219 03:05:04.997492  338816 provision.go:177] copyRemoteCerts
	I1219 03:05:04.997548  338816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:04.997579  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.020166  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.124551  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:05.143320  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:05:05.161475  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:05.184277  338816 provision.go:87] duration metric: took 247.423458ms to configureAuth
	I1219 03:05:05.184311  338816 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:05.184533  338816 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:05:05.184679  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.206856  338816 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:05.207191  338816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1219 03:05:05.207233  338816 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:05.559981  338816 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:05.560014  338816 machine.go:97] duration metric: took 4.167815236s to provisionDockerMachine
	I1219 03:05:05.560030  338816 start.go:293] postStartSetup for "no-preload-278042" (driver="docker")
	I1219 03:05:05.560046  338816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:05.560113  338816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:05.560164  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.581839  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.687357  338816 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:05.691365  338816 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:05.691398  338816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:05.691411  338816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:05.691474  338816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:05.691587  338816 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:05.691772  338816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:05.699679  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:05.719170  338816 start.go:296] duration metric: took 159.123492ms for postStartSetup
	I1219 03:05:05.719255  338816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:05.719295  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.740866  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.037451  338195 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:05.037477  338195 machine.go:97] duration metric: took 4.458376726s to provisionDockerMachine
	I1219 03:05:05.037573  338195 start.go:293] postStartSetup for "old-k8s-version-433330" (driver="docker")
	I1219 03:05:05.037589  338195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:05.037644  338195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:05.037684  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:05.057696  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:05.159796  338195 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:05.164068  338195 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:05.164098  338195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:05.164112  338195 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:05.164158  338195 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:05.164279  338195 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:05.164418  338195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:05.173600  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:05.194256  338195 start.go:296] duration metric: took 156.665743ms for postStartSetup
	I1219 03:05:05.194339  338195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:05.194411  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:05.214524  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:05.323851  338195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:05.328539  338195 fix.go:56] duration metric: took 5.265541463s for fixHost
	I1219 03:05:05.328561  338195 start.go:83] releasing machines lock for "old-k8s-version-433330", held for 5.265588686s
	I1219 03:05:05.328620  338195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-433330
	I1219 03:05:05.348534  338195 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:05.348582  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:05.348648  338195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:05.348765  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:05.368174  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:05.369386  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:05.468618  338195 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:05.532150  338195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:05.572906  338195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:05.578881  338195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:05.578950  338195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:05.587647  338195 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:05.587672  338195 start.go:496] detecting cgroup driver to use...
	I1219 03:05:05.587715  338195 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:05.587762  338195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:05.602449  338195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:05.614770  338195 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:05.614837  338195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:05.629394  338195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:05.643068  338195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:05.732404  338195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:05.820424  338195 docker.go:234] disabling docker service ...
	I1219 03:05:05.820489  338195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:05.835029  338195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:05.849083  338195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:05.944949  338195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:06.047437  338195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:06.060193  338195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:06.075557  338195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1219 03:05:06.075622  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.084882  338195 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:06.084948  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.095824  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.105026  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.113861  338195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:06.122038  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.131004  338195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.140211  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.149510  338195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:06.156919  338195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:06.164379  338195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:06.274569  338195 ssh_runner.go:195] Run: sudo systemctl restart crio
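Taken together, the sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is switched to "systemd", conmon_cgroup is set to "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls; IPv4 forwarding is then enabled and CRI-O restarted. To eyeball the result on the node (a sketch, not part of the test run):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    cat /proc/sys/net/ipv4/ip_forward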
	I1219 03:05:06.430797  338195 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:06.430858  338195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:06.435163  338195 start.go:564] Will wait 60s for crictl version
	I1219 03:05:06.435236  338195 ssh_runner.go:195] Run: which crictl
	I1219 03:05:06.439145  338195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:06.464904  338195 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:06.465012  338195 ssh_runner.go:195] Run: crio --version
	I1219 03:05:06.496536  338195 ssh_runner.go:195] Run: crio --version
	I1219 03:05:06.533933  338195 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1219 03:05:05.843426  338816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:05.848564  338816 fix.go:56] duration metric: took 4.824169182s for fixHost
	I1219 03:05:05.848597  338816 start.go:83] releasing machines lock for "no-preload-278042", held for 4.824225362s
	I1219 03:05:05.848657  338816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-278042
	I1219 03:05:05.867876  338816 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:05.867932  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.867985  338816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:05.868068  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.893780  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.896543  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.993630  338816 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:06.063431  338816 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:06.101038  338816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:06.106013  338816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:06.106085  338816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:06.113838  338816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:06.113860  338816 start.go:496] detecting cgroup driver to use...
	I1219 03:05:06.113894  338816 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:06.113943  338816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:06.127982  338816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:06.142051  338816 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:06.142102  338816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:06.156354  338816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:06.170372  338816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:06.270550  338816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:06.360630  338816 docker.go:234] disabling docker service ...
	I1219 03:05:06.360733  338816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:06.376556  338816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:06.390325  338816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:06.478371  338816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:06.569492  338816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:06.583135  338816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:06.599607  338816 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:06.599657  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.610077  338816 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:06.610131  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.619771  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.630334  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.641094  338816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:06.650596  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.660930  338816 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.669881  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.680885  338816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:06.689816  338816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:06.697737  338816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:06.799123  338816 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:06.962131  338816 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:06.962209  338816 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:06.966939  338816 start.go:564] Will wait 60s for crictl version
	I1219 03:05:06.967023  338816 ssh_runner.go:195] Run: which crictl
	I1219 03:05:06.972096  338816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:07.007534  338816 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:07.007600  338816 ssh_runner.go:195] Run: crio --version
	I1219 03:05:07.038097  338816 ssh_runner.go:195] Run: crio --version
	I1219 03:05:07.069654  338816 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:05:03.348540  332512 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:05:03.353332  332512 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1219 03:05:03.353350  332512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:05:03.367762  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1219 03:05:03.590042  332512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:05:03.590142  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:03.590173  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-717222 minikube.k8s.io/updated_at=2025_12_19T03_05_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=default-k8s-diff-port-717222 minikube.k8s.io/primary=true
	I1219 03:05:03.689380  332512 ops.go:34] apiserver oom_adj: -16
	I1219 03:05:03.689428  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:04.189758  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:04.689971  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:05.189749  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:05.689779  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:06.190046  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:06.689620  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:03.347245  330835 addons.go:546] duration metric: took 476.630979ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:05:03.650437  330835 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-805185" context rescaled to 1 replicas
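With storage-provisioner and default-storageclass enabled for this profile, a by-hand check would be something like the following; the assumption (not shown in this log) is that the addon's pod is named storage-provisioner in kube-system:

    kubectl get storageclass
    kubectl -n kube-system get pod storage-provisioner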
	W1219 03:05:05.147152  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	W1219 03:05:07.147663  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	I1219 03:05:06.535159  338195 cli_runner.go:164] Run: docker network inspect old-k8s-version-433330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:06.554369  338195 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:06.558757  338195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:06.569671  338195 kubeadm.go:884] updating cluster {Name:old-k8s-version-433330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-433330 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:06.569838  338195 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1219 03:05:06.569903  338195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:06.604287  338195 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:06.604328  338195 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:06.604389  338195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:06.631597  338195 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:06.631622  338195 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:06.631631  338195 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1219 03:05:06.631776  338195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-433330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-433330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
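The unit fragment above is the kubelet drop-in minikube generates for this profile; later in the log it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes) before kubelet is started again. On the node, the effective unit can be inspected with, for instance:

    systemctl cat kubelet
    systemctl show -p ExecStart kubelet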
	I1219 03:05:06.631902  338195 ssh_runner.go:195] Run: crio config
	I1219 03:05:06.683350  338195 cni.go:84] Creating CNI manager for ""
	I1219 03:05:06.683378  338195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:06.683395  338195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:06.683426  338195 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-433330 NodeName:old-k8s-version-433330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:06.683581  338195 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-433330"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:06.683646  338195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1219 03:05:06.692391  338195 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:06.692452  338195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:06.700694  338195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:05:06.714596  338195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:06.732986  338195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
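The kubeadm config dumped above is what lands in /var/tmp/minikube/kubeadm.yaml.new (2159 bytes here). If an earlier start left a /var/tmp/minikube/kubeadm.yaml behind (an assumption, not shown in this log), the delta can be inspected with:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new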
	I1219 03:05:06.748885  338195 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:06.753762  338195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:06.765270  338195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:06.853466  338195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:06.879610  338195 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330 for IP: 192.168.76.2
	I1219 03:05:06.879634  338195 certs.go:195] generating shared ca certs ...
	I1219 03:05:06.879654  338195 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:06.879837  338195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:05:06.879900  338195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:05:06.879916  338195 certs.go:257] generating profile certs ...
	I1219 03:05:06.880036  338195 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.key
	I1219 03:05:06.880106  338195 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/apiserver.key.c5e580e0
	I1219 03:05:06.880162  338195 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/proxy-client.key
	I1219 03:05:06.880339  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:05:06.880392  338195 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:05:06.880408  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:05:06.880444  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:05:06.880486  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:05:06.880524  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:05:06.880587  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:06.881384  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:05:06.918067  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:05:06.940891  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:05:06.961602  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:05:06.989284  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1219 03:05:07.012446  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:05:07.032887  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:05:07.052627  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:05:07.073776  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:05:07.094304  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:05:07.113571  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:05:07.132783  338195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:05:07.150244  338195 ssh_runner.go:195] Run: openssl version
	I1219 03:05:07.159577  338195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.168651  338195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:05:07.178541  338195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.183698  338195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.183792  338195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.226693  338195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:05:07.234922  338195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.243549  338195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:05:07.252924  338195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.257881  338195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.257959  338195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.299907  338195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:05:07.307862  338195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.319534  338195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:05:07.328295  338195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.332255  338195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.332322  338195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.369339  338195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:05:07.377278  338195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:05:07.381456  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:05:07.430152  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:05:07.503045  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:05:07.580955  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:05:07.642970  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:05:07.698653  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
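
The openssl runs above are minikube's certificate expiry check: each control-plane certificate must stay valid for at least another 86400 seconds (24 hours), which is what "openssl x509 -checkend 86400" verifies. Below is a minimal Go sketch of an equivalent check, not minikube's actual implementation; the certificate path is an assumption taken from the log.

// Minimal sketch: report whether a PEM certificate expires within the next
// 24h, mirroring what "openssl x509 -checkend 86400" checks in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the log checks certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour) // same window as -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
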
	I1219 03:05:07.757686  338195 kubeadm.go:401] StartCluster: {Name:old-k8s-version-433330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-433330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:07.757838  338195 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:05:07.757917  338195 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:05:07.795715  338195 cri.go:92] found id: "ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e"
	I1219 03:05:07.795786  338195 cri.go:92] found id: "dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386"
	I1219 03:05:07.795797  338195 cri.go:92] found id: "6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b"
	I1219 03:05:07.795803  338195 cri.go:92] found id: "e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100"
	I1219 03:05:07.795808  338195 cri.go:92] found id: ""
	I1219 03:05:07.795857  338195 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:05:07.810766  338195 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:05:07Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:05:07.810833  338195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:05:07.821138  338195 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:05:07.821160  338195 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:05:07.821214  338195 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:05:07.830813  338195 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:05:07.831943  338195 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-433330" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:07.832576  338195 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-433330" cluster setting kubeconfig missing "old-k8s-version-433330" context setting]
	I1219 03:05:07.833610  338195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:07.835975  338195 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:05:07.846413  338195 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1219 03:05:07.846449  338195 kubeadm.go:602] duration metric: took 25.282312ms to restartPrimaryControlPlane
	I1219 03:05:07.846460  338195 kubeadm.go:403] duration metric: took 88.786269ms to StartCluster
	I1219 03:05:07.846477  338195 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:07.846534  338195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:07.848255  338195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:07.848542  338195 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:05:07.848615  338195 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:05:07.848736  338195 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-433330"
	I1219 03:05:07.848763  338195 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-433330"
	W1219 03:05:07.848771  338195 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:05:07.848800  338195 host.go:66] Checking if "old-k8s-version-433330" exists ...
	I1219 03:05:07.848809  338195 config.go:182] Loaded profile config "old-k8s-version-433330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1219 03:05:07.848842  338195 addons.go:70] Setting dashboard=true in profile "old-k8s-version-433330"
	I1219 03:05:07.848859  338195 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-433330"
	I1219 03:05:07.848867  338195 addons.go:239] Setting addon dashboard=true in "old-k8s-version-433330"
	I1219 03:05:07.848874  338195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-433330"
	W1219 03:05:07.848876  338195 addons.go:248] addon dashboard should already be in state true
	I1219 03:05:07.848915  338195 host.go:66] Checking if "old-k8s-version-433330" exists ...
	I1219 03:05:07.849165  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:07.849304  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:07.849396  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:07.855528  338195 out.go:179] * Verifying Kubernetes components...
	I1219 03:05:07.856889  338195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:07.879295  338195 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:07.879381  338195 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:07.879435  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:07.879640  338195 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-433330"
	W1219 03:05:07.879684  338195 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:07.879737  338195 host.go:66] Checking if "old-k8s-version-433330" exists ...
	I1219 03:05:07.880382  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:07.881862  338195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:05:07.189594  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:07.690351  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:08.189479  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:08.324522  332512 kubeadm.go:1114] duration metric: took 4.734448628s to wait for elevateKubeSystemPrivileges
	I1219 03:05:08.324553  332512 kubeadm.go:403] duration metric: took 16.78586446s to StartCluster
	I1219 03:05:08.324572  332512 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.324658  332512 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:08.326911  332512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.327319  332512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:05:08.327490  332512 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:05:08.328010  332512 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:08.327791  332512 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:05:08.328121  332512 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:05:08.328138  332512 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	I1219 03:05:08.328162  332512 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:05:08.328642  332512 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:08.328946  332512 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:05:08.328966  332512 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:05:08.329284  332512 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:08.329845  332512 out.go:179] * Verifying Kubernetes components...
	I1219 03:05:08.330984  332512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:08.361776  332512 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	I1219 03:05:08.361817  332512 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:05:08.362269  332512 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:08.365940  332512 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:05:07.070863  338816 cli_runner.go:164] Run: docker network inspect no-preload-278042 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:07.090826  338816 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:07.095208  338816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:07.107191  338816 kubeadm.go:884] updating cluster {Name:no-preload-278042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:07.107316  338816 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:05:07.107349  338816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:07.143426  338816 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:07.143452  338816 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:07.143461  338816 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:05:07.143566  338816 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-278042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:07.143673  338816 ssh_runner.go:195] Run: crio config
	I1219 03:05:07.204551  338816 cni.go:84] Creating CNI manager for ""
	I1219 03:05:07.204574  338816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:07.204593  338816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:07.204618  338816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-278042 NodeName:no-preload-278042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:07.204775  338816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-278042"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:07.204856  338816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:05:07.213896  338816 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:07.213983  338816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:07.222161  338816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1219 03:05:07.237467  338816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:05:07.253225  338816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1219 03:05:07.267954  338816 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:07.271869  338816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
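
The shell one-liner above performs an idempotent /etc/hosts update: drop any existing control-plane.minikube.internal entry, then append a fresh one. Below is a minimal Go sketch of the same idea, illustrative only and not minikube's code; the hostname and address come from the log, and writing /etc/hosts requires root.

// Minimal sketch: remove any prior control-plane.minikube.internal record
// from /etc/hosts, then append the current one (idempotent update).
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.103.2\tcontrol-plane.minikube.internal" // from the log

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Skip any previous mapping for this name; keep everything else.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	updated := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile(hostsPath, []byte(updated), 0644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("host record injected")
}
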
	I1219 03:05:07.282665  338816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:07.368635  338816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:07.395859  338816 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042 for IP: 192.168.103.2
	I1219 03:05:07.395885  338816 certs.go:195] generating shared ca certs ...
	I1219 03:05:07.395910  338816 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:07.396055  338816 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:05:07.396125  338816 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:05:07.396145  338816 certs.go:257] generating profile certs ...
	I1219 03:05:07.396242  338816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.key
	I1219 03:05:07.396319  338816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/apiserver.key.225a496e
	I1219 03:05:07.396365  338816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/proxy-client.key
	I1219 03:05:07.396499  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:05:07.396531  338816 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:05:07.396541  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:05:07.396565  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:05:07.396590  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:05:07.396612  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:05:07.396653  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:07.397248  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:05:07.424809  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:05:07.450904  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:05:07.484112  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:05:07.532227  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:05:07.568778  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:05:07.597233  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:05:07.633283  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:05:07.663320  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:05:07.687923  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:05:07.713957  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:05:07.735390  338816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:05:07.754651  338816 ssh_runner.go:195] Run: openssl version
	I1219 03:05:07.761976  338816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.770425  338816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:05:07.780764  338816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.785927  338816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.785982  338816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.835166  338816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:05:07.846470  338816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.860237  338816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:05:07.871696  338816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.877191  338816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.877254  338816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.946235  338816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:05:07.957618  338816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.968122  338816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:05:07.979583  338816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.986265  338816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.986401  338816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:05:08.050665  338816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:05:08.061489  338816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:05:08.067348  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:05:08.134141  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:05:08.201735  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:05:08.275726  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:05:08.344352  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:05:08.409074  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:05:08.474374  338816 kubeadm.go:401] StartCluster: {Name:no-preload-278042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:08.474489  338816 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:05:08.474554  338816 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:05:08.528613  338816 cri.go:92] found id: "5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a"
	I1219 03:05:08.528639  338816 cri.go:92] found id: "001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae"
	I1219 03:05:08.528646  338816 cri.go:92] found id: "973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2"
	I1219 03:05:08.528651  338816 cri.go:92] found id: "821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec"
	I1219 03:05:08.528656  338816 cri.go:92] found id: ""
	I1219 03:05:08.528698  338816 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:05:08.554108  338816 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:05:08Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:05:08.554182  338816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:05:08.568145  338816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:05:08.568218  338816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:05:08.568321  338816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:05:08.581222  338816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:05:08.582541  338816 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-278042" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:08.583742  338816 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-278042" cluster setting kubeconfig missing "no-preload-278042" context setting]
	I1219 03:05:08.585602  338816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.588255  338816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:05:08.604210  338816 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1219 03:05:08.604347  338816 kubeadm.go:602] duration metric: took 36.110072ms to restartPrimaryControlPlane
	I1219 03:05:08.604362  338816 kubeadm.go:403] duration metric: took 129.998216ms to StartCluster
	I1219 03:05:08.604495  338816 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.604622  338816 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:08.607318  338816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.607866  338816 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:05:08.607997  338816 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:05:08.608095  338816 addons.go:70] Setting storage-provisioner=true in profile "no-preload-278042"
	I1219 03:05:08.608111  338816 addons.go:239] Setting addon storage-provisioner=true in "no-preload-278042"
	W1219 03:05:08.608119  338816 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:05:08.608148  338816 host.go:66] Checking if "no-preload-278042" exists ...
	I1219 03:05:08.608655  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:08.608956  338816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:05:08.609157  338816 addons.go:70] Setting dashboard=true in profile "no-preload-278042"
	I1219 03:05:08.609187  338816 addons.go:239] Setting addon dashboard=true in "no-preload-278042"
	W1219 03:05:08.609224  338816 addons.go:248] addon dashboard should already be in state true
	I1219 03:05:08.609250  338816 host.go:66] Checking if "no-preload-278042" exists ...
	I1219 03:05:08.609963  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:08.610134  338816 addons.go:70] Setting default-storageclass=true in profile "no-preload-278042"
	I1219 03:05:08.610171  338816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-278042"
	I1219 03:05:08.610441  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:08.612260  338816 out.go:179] * Verifying Kubernetes components...
	I1219 03:05:08.613459  338816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:08.650159  338816 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:08.650253  338816 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:08.650380  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:08.650932  338816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:05:08.367066  332512 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.367116  332512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:08.367199  332512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:08.400056  332512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:08.405801  332512 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:08.405827  332512 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:08.405883  332512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:08.442051  332512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:08.495984  332512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:05:08.576504  332512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:08.638126  332512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.714925  332512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:08.903440  332512 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1219 03:05:08.905932  332512 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:05:09.140434  332512 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:05:07.883077  338195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:07.883135  338195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:07.883213  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:07.910300  338195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:07.910324  338195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:07.910382  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:07.912917  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:07.913769  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:07.945627  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:08.042651  338195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:08.066541  338195 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-433330" to be "Ready" ...
	I1219 03:05:08.073928  338195 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:08.074110  338195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.078080  338195 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:08.097406  338195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:08.652002  338816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.652056  338816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:08.652162  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:08.660192  338816 addons.go:239] Setting addon default-storageclass=true in "no-preload-278042"
	W1219 03:05:08.660219  338816 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:08.660249  338816 host.go:66] Checking if "no-preload-278042" exists ...
	I1219 03:05:08.660714  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:08.693862  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:08.694594  338816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:08.694612  338816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:08.694662  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:08.697825  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:08.733299  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:08.842079  338816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:08.844314  338816 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:08.849998  338816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.867204  338816 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:08.867451  338816 node_ready.go:35] waiting up to 6m0s for node "no-preload-278042" to be "Ready" ...
	I1219 03:05:08.880152  338816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:10.315153  338816 node_ready.go:49] node "no-preload-278042" is "Ready"
	I1219 03:05:10.315193  338816 node_ready.go:38] duration metric: took 1.447691115s for node "no-preload-278042" to be "Ready" ...
	I1219 03:05:10.315209  338816 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:10.315268  338816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:09.141786  332512 addons.go:546] duration metric: took 813.994932ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:05:09.410032  332512 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-717222" context rescaled to 1 replicas
	W1219 03:05:10.909328  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	W1219 03:05:09.148113  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	W1219 03:05:11.148219  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	I1219 03:05:10.594634  338195 node_ready.go:49] node "old-k8s-version-433330" is "Ready"
	I1219 03:05:10.594676  338195 node_ready.go:38] duration metric: took 2.528095005s for node "old-k8s-version-433330" to be "Ready" ...
	I1219 03:05:10.594694  338195 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:10.594785  338195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:11.438208  338195 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (3.3600947s)
	I1219 03:05:11.438243  338195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.364104191s)
	I1219 03:05:11.438299  338195 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:11.438319  338195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.340863748s)
	I1219 03:05:11.438418  338195 api_server.go:72] duration metric: took 3.589841435s to wait for apiserver process to appear ...
	I1219 03:05:11.438433  338195 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:11.438450  338195 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:05:11.445459  338195 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1219 03:05:11.445494  338195 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
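
The 500 above is the expected transient state while the apiserver finishes its post-start hooks; the log simply polls /healthz again until it returns 200, as the next lines show. Below is a minimal Go sketch of such a polling loop, an illustration rather than minikube's code; the endpoint is taken from the log, and the InsecureSkipVerify shortcut is an assumption for brevity (a real client would verify the cluster CA instead).

// Minimal sketch: poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only; verify the cluster CA in real use.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.76.2:8443/healthz" // endpoint taken from the log
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("attempt %d: healthz returned %d, retrying\n", attempt, status)
		} else {
			fmt.Printf("attempt %d: %v, retrying\n", attempt, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy")
}
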
	I1219 03:05:11.939501  338195 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:05:11.944271  338195 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:05:11.946314  338195 api_server.go:141] control plane version: v1.28.0
	I1219 03:05:11.946344  338195 api_server.go:131] duration metric: took 507.904118ms to wait for apiserver health ...
	I1219 03:05:11.946355  338195 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:11.950741  338195 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:11.950809  338195 system_pods.go:61] "coredns-5dd5756b68-vp79f" [9fcc07be-0cde-4964-af90-fb09218728e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:11.950821  338195 system_pods.go:61] "etcd-old-k8s-version-433330" [e7e65e56-a92a-43ec-8dda-93b521937bef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:11.950840  338195 system_pods.go:61] "kindnet-hm2sz" [c6df6f60-75af-46bf-9a07-9644745d5f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:11.950853  338195 system_pods.go:61] "kube-apiserver-old-k8s-version-433330" [50ae6467-8e2c-41f5-9c9c-eda6741c41f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:11.950861  338195 system_pods.go:61] "kube-controller-manager-old-k8s-version-433330" [f680d80e-8a0e-486d-8e26-91e124efe760] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:11.950873  338195 system_pods.go:61] "kube-proxy-wdrk8" [b2738e98-0383-41b2-b183-a13a2a915c6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:11.950881  338195 system_pods.go:61] "kube-scheduler-old-k8s-version-433330" [465a3df8-5c4b-44d0-aaa1-b4b1e35e0d67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:11.950890  338195 system_pods.go:61] "storage-provisioner" [0fba7aca-106d-40c8-8651-91680e4fedcc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:11.950898  338195 system_pods.go:74] duration metric: took 4.535468ms to wait for pod list to return data ...
	I1219 03:05:11.950910  338195 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:11.956062  338195 default_sa.go:45] found service account: "default"
	I1219 03:05:11.956091  338195 default_sa.go:55] duration metric: took 5.174812ms for default service account to be created ...
	I1219 03:05:11.956104  338195 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:11.959815  338195 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:11.959849  338195 system_pods.go:89] "coredns-5dd5756b68-vp79f" [9fcc07be-0cde-4964-af90-fb09218728e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:11.959863  338195 system_pods.go:89] "etcd-old-k8s-version-433330" [e7e65e56-a92a-43ec-8dda-93b521937bef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:11.959876  338195 system_pods.go:89] "kindnet-hm2sz" [c6df6f60-75af-46bf-9a07-9644745d5f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:11.959886  338195 system_pods.go:89] "kube-apiserver-old-k8s-version-433330" [50ae6467-8e2c-41f5-9c9c-eda6741c41f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:11.959896  338195 system_pods.go:89] "kube-controller-manager-old-k8s-version-433330" [f680d80e-8a0e-486d-8e26-91e124efe760] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:11.959909  338195 system_pods.go:89] "kube-proxy-wdrk8" [b2738e98-0383-41b2-b183-a13a2a915c6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:11.959919  338195 system_pods.go:89] "kube-scheduler-old-k8s-version-433330" [465a3df8-5c4b-44d0-aaa1-b4b1e35e0d67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:11.959928  338195 system_pods.go:89] "storage-provisioner" [0fba7aca-106d-40c8-8651-91680e4fedcc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:11.959939  338195 system_pods.go:126] duration metric: took 3.828183ms to wait for k8s-apps to be running ...
	I1219 03:05:11.959951  338195 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:11.960013  338195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:12.440481  338195 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.002148257s)
	I1219 03:05:12.440525  338195 system_svc.go:56] duration metric: took 480.567587ms WaitForService to wait for kubelet
	I1219 03:05:12.440544  338195 kubeadm.go:587] duration metric: took 4.591969486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:12.440564  338195 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:12.440569  338195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:12.443630  338195 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:12.443661  338195 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:12.443680  338195 node_conditions.go:105] duration metric: took 3.110203ms to run NodePressure ...
	I1219 03:05:12.443694  338195 start.go:242] waiting for startup goroutines ...
	I1219 03:05:11.019356  338816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.169299179s)
	I1219 03:05:11.019382  338816 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.152150538s)
	I1219 03:05:11.019432  338816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.139259984s)
	I1219 03:05:11.019463  338816 api_server.go:72] duration metric: took 2.410472354s to wait for apiserver process to appear ...
	I1219 03:05:11.019474  338816 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:11.019499  338816 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:05:11.019872  338816 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:11.025534  338816 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:11.025568  338816 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:11.520552  338816 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:05:11.525350  338816 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1219 03:05:11.526441  338816 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:05:11.526466  338816 api_server.go:131] duration metric: took 506.986603ms to wait for apiserver health ...
	I1219 03:05:11.526475  338816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:11.530506  338816 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:11.530551  338816 system_pods.go:61] "coredns-7d764666f9-vj7lm" [6bb897eb-e856-4660-aa9c-3fac6b610d38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:11.530565  338816 system_pods.go:61] "etcd-no-preload-278042" [a9dcae0a-af63-4eb2-a240-c68ab749763e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:11.530583  338816 system_pods.go:61] "kindnet-xrp2s" [b0f7317a-c504-4597-ba97-3d50ee2927c1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:11.530593  338816 system_pods.go:61] "kube-apiserver-no-preload-278042" [ac835fd3-def8-49e8-bee3-b76ee0667ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:11.530608  338816 system_pods.go:61] "kube-controller-manager-no-preload-278042" [0938d60f-d3e9-457e-ac68-8cba5d210c11] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:11.530618  338816 system_pods.go:61] "kube-proxy-g2gm4" [4cb3af28-e9b4-45b6-80d4-fe8bdadd6911] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:11.530632  338816 system_pods.go:61] "kube-scheduler-no-preload-278042" [bb8f444d-8eae-4359-917f-04165ccecf47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:11.530643  338816 system_pods.go:61] "storage-provisioner" [7114449c-463d-44ef-955c-5dda46333a32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:11.530651  338816 system_pods.go:74] duration metric: took 4.169725ms to wait for pod list to return data ...
	I1219 03:05:11.530660  338816 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:11.533311  338816 default_sa.go:45] found service account: "default"
	I1219 03:05:11.533333  338816 default_sa.go:55] duration metric: took 2.662455ms for default service account to be created ...
	I1219 03:05:11.533342  338816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:11.536223  338816 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:11.536255  338816 system_pods.go:89] "coredns-7d764666f9-vj7lm" [6bb897eb-e856-4660-aa9c-3fac6b610d38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:11.536267  338816 system_pods.go:89] "etcd-no-preload-278042" [a9dcae0a-af63-4eb2-a240-c68ab749763e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:11.536276  338816 system_pods.go:89] "kindnet-xrp2s" [b0f7317a-c504-4597-ba97-3d50ee2927c1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:11.536291  338816 system_pods.go:89] "kube-apiserver-no-preload-278042" [ac835fd3-def8-49e8-bee3-b76ee0667ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:11.536298  338816 system_pods.go:89] "kube-controller-manager-no-preload-278042" [0938d60f-d3e9-457e-ac68-8cba5d210c11] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:11.536306  338816 system_pods.go:89] "kube-proxy-g2gm4" [4cb3af28-e9b4-45b6-80d4-fe8bdadd6911] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:11.536313  338816 system_pods.go:89] "kube-scheduler-no-preload-278042" [bb8f444d-8eae-4359-917f-04165ccecf47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:11.536320  338816 system_pods.go:89] "storage-provisioner" [7114449c-463d-44ef-955c-5dda46333a32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:11.536329  338816 system_pods.go:126] duration metric: took 2.980203ms to wait for k8s-apps to be running ...
	I1219 03:05:11.536337  338816 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:11.536385  338816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:12.894015  338816 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.874104985s)
	I1219 03:05:12.894083  338816 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.357672898s)
	I1219 03:05:12.894105  338816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:12.894115  338816 system_svc.go:56] duration metric: took 1.357775052s WaitForService to wait for kubelet
	I1219 03:05:12.894126  338816 kubeadm.go:587] duration metric: took 4.285135318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:12.894151  338816 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:12.897676  338816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:12.897720  338816 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:12.897737  338816 node_conditions.go:105] duration metric: took 3.579647ms to run NodePressure ...
	I1219 03:05:12.897752  338816 start.go:242] waiting for startup goroutines ...
	I1219 03:05:15.854695  338816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.960550336s)
	I1219 03:05:15.854808  338816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:16.045446  338816 addons.go:500] Verifying addon dashboard=true in "no-preload-278042"
	I1219 03:05:16.045845  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:16.070451  338816 out.go:179] * Verifying dashboard addon...
	I1219 03:05:15.413795  338195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.973186291s)
	I1219 03:05:15.413880  338195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:16.135613  338195 addons.go:500] Verifying addon dashboard=true in "old-k8s-version-433330"
	I1219 03:05:16.135982  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:16.156407  338195 out.go:179] * Verifying dashboard addon...
	W1219 03:05:12.910131  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	W1219 03:05:15.410090  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	W1219 03:05:13.647770  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	I1219 03:05:15.647507  330835 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:15.647547  330835 node_ready.go:38] duration metric: took 12.50348426s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:15.647565  330835 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:15.647622  330835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:15.665212  330835 api_server.go:72] duration metric: took 12.794611643s to wait for apiserver process to appear ...
	I1219 03:05:15.665242  330835 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:15.665272  330835 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:15.670942  330835 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:15.672243  330835 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:15.672276  330835 api_server.go:131] duration metric: took 7.026021ms to wait for apiserver health ...
	I1219 03:05:15.672288  330835 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:15.676548  330835 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:15.676588  330835 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:15.676597  330835 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running
	I1219 03:05:15.676606  330835 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running
	I1219 03:05:15.676612  330835 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running
	I1219 03:05:15.676621  330835 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running
	I1219 03:05:15.676625  330835 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running
	I1219 03:05:15.676635  330835 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running
	I1219 03:05:15.676643  330835 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:15.676655  330835 system_pods.go:74] duration metric: took 4.359785ms to wait for pod list to return data ...
	I1219 03:05:15.676667  330835 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:15.679531  330835 default_sa.go:45] found service account: "default"
	I1219 03:05:15.679562  330835 default_sa.go:55] duration metric: took 2.88404ms for default service account to be created ...
	I1219 03:05:15.679574  330835 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:15.686023  330835 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:15.686069  330835 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:15.686080  330835 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running
	I1219 03:05:15.686092  330835 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running
	I1219 03:05:15.686098  330835 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running
	I1219 03:05:15.686105  330835 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running
	I1219 03:05:15.686110  330835 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running
	I1219 03:05:15.686115  330835 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running
	I1219 03:05:15.686123  330835 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:15.686155  330835 retry.go:31] will retry after 250.846843ms: missing components: kube-dns
	I1219 03:05:15.942684  330835 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:15.942752  330835 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:15.942761  330835 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running
	I1219 03:05:15.942770  330835 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running
	I1219 03:05:15.942776  330835 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running
	I1219 03:05:15.942783  330835 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running
	I1219 03:05:15.942788  330835 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running
	I1219 03:05:15.942793  330835 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running
	I1219 03:05:15.942802  330835 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:15.942822  330835 retry.go:31] will retry after 299.918101ms: missing components: kube-dns
	I1219 03:05:16.246247  330835 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:16.246283  330835 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running
	I1219 03:05:16.246293  330835 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running
	I1219 03:05:16.246299  330835 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running
	I1219 03:05:16.246305  330835 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running
	I1219 03:05:16.246312  330835 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running
	I1219 03:05:16.246317  330835 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running
	I1219 03:05:16.246322  330835 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running
	I1219 03:05:16.246328  330835 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running
	I1219 03:05:16.246339  330835 system_pods.go:126] duration metric: took 566.755252ms to wait for k8s-apps to be running ...
	I1219 03:05:16.246352  330835 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:16.246396  330835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:16.261160  330835 system_svc.go:56] duration metric: took 14.796481ms WaitForService to wait for kubelet
	I1219 03:05:16.261200  330835 kubeadm.go:587] duration metric: took 13.390608365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:16.261220  330835 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:16.263958  330835 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:16.263983  330835 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:16.263996  330835 node_conditions.go:105] duration metric: took 2.770433ms to run NodePressure ...
	I1219 03:05:16.264008  330835 start.go:242] waiting for startup goroutines ...
	I1219 03:05:16.264017  330835 start.go:247] waiting for cluster config update ...
	I1219 03:05:16.264029  330835 start.go:256] writing updated cluster config ...
	I1219 03:05:16.264312  330835 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:16.268532  330835 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:16.347111  330835 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.352357  330835 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:05:16.352382  330835 pod_ready.go:86] duration metric: took 5.238735ms for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.354520  330835 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.358398  330835 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:05:16.358420  330835 pod_ready.go:86] duration metric: took 3.879167ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.360455  330835 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.364149  330835 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:05:16.364168  330835 pod_ready.go:86] duration metric: took 3.693544ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.365862  330835 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.672663  330835 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:05:16.672697  330835 pod_ready.go:86] duration metric: took 306.817483ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.874110  330835 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:17.272998  330835 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:05:17.273027  330835 pod_ready.go:86] duration metric: took 398.889124ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:17.473660  330835 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:17.873324  330835 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:05:17.873358  330835 pod_ready.go:86] duration metric: took 399.668419ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:17.873373  330835 pod_ready.go:40] duration metric: took 1.604806437s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:17.928904  330835 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:05:17.931254  330835 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	I1219 03:05:16.159862  338195 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:16.162760  338195 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:16.072616  338816 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:16.075989  338816 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:16.076006  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.577520  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.075642  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.576941  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.077640  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.576828  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.078319  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.576534  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.076165  338816 kapi.go:107] duration metric: took 4.003544668s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:05:20.077978  338816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-278042 addons enable metrics-server
	
	I1219 03:05:20.080004  338816 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:05:20.085253  338816 addons.go:546] duration metric: took 11.477254429s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:05:20.085362  338816 start.go:247] waiting for cluster config update ...
	I1219 03:05:20.085378  338816 start.go:256] writing updated cluster config ...
	I1219 03:05:20.085793  338816 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:20.093277  338816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:20.097833  338816 pod_ready.go:83] waiting for pod "coredns-7d764666f9-vj7lm" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:05:17.909999  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	W1219 03:05:19.910131  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	I1219 03:05:21.910035  332512 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:21.910077  332512 node_ready.go:38] duration metric: took 13.004087015s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:05:21.910093  332512 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:21.910153  332512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:21.926211  332512 api_server.go:72] duration metric: took 13.598689266s to wait for apiserver process to appear ...
	I1219 03:05:21.926238  332512 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:21.926261  332512 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:21.931204  332512 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:05:21.932318  332512 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:21.932346  332512 api_server.go:131] duration metric: took 6.100419ms to wait for apiserver health ...
	I1219 03:05:21.932357  332512 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:21.936179  332512 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:21.936208  332512 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:21.936214  332512 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:21.936219  332512 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:21.936222  332512 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:21.936226  332512 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:21.936230  332512 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:21.936234  332512 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:21.936242  332512 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:21.936251  332512 system_pods.go:74] duration metric: took 3.886862ms to wait for pod list to return data ...
	I1219 03:05:21.936263  332512 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:21.938733  332512 default_sa.go:45] found service account: "default"
	I1219 03:05:21.938754  332512 default_sa.go:55] duration metric: took 2.48343ms for default service account to be created ...
	I1219 03:05:21.938762  332512 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:21.941900  332512 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:21.941943  332512 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:21.941953  332512 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:21.941963  332512 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:21.941991  332512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:21.941999  332512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:21.942010  332512 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:21.942017  332512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:21.942038  332512 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:21.942077  332512 retry.go:31] will retry after 190.375881ms: missing components: kube-dns
	I1219 03:05:22.137040  332512 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:22.137075  332512 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:22.137082  332512 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:22.137091  332512 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:22.137102  332512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:22.137113  332512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:22.137119  332512 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:22.137125  332512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:22.137133  332512 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:22.137152  332512 retry.go:31] will retry after 271.345441ms: missing components: kube-dns
	I1219 03:05:22.413383  332512 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:22.413426  332512 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:22.413449  332512 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:22.413466  332512 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:22.413473  332512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:22.413480  332512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:22.413488  332512 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:22.413495  332512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:22.413507  332512 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:22.413530  332512 retry.go:31] will retry after 362.736045ms: missing components: kube-dns
	I1219 03:05:22.781610  332512 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:22.781658  332512 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running
	I1219 03:05:22.781667  332512 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:22.781674  332512 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:22.781680  332512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:22.781687  332512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:22.781693  332512 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:22.781698  332512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:22.781732  332512 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:05:22.781743  332512 system_pods.go:126] duration metric: took 842.974471ms to wait for k8s-apps to be running ...
	I1219 03:05:22.781758  332512 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:22.781811  332512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:22.799460  332512 system_svc.go:56] duration metric: took 17.692998ms WaitForService to wait for kubelet
	I1219 03:05:22.799488  332512 kubeadm.go:587] duration metric: took 14.471971429s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:22.799513  332512 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:22.802953  332512 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:22.802983  332512 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:22.803004  332512 node_conditions.go:105] duration metric: took 3.48447ms to run NodePressure ...
	I1219 03:05:22.803018  332512 start.go:242] waiting for startup goroutines ...
	I1219 03:05:22.803031  332512 start.go:247] waiting for cluster config update ...
	I1219 03:05:22.803045  332512 start.go:256] writing updated cluster config ...
	I1219 03:05:22.803366  332512 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:22.808138  332512 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:22.881856  332512 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.887118  332512 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:05:22.887149  332512 pod_ready.go:86] duration metric: took 5.261763ms for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.889574  332512 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.894034  332512 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:22.894059  332512 pod_ready.go:86] duration metric: took 4.396584ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.896328  332512 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.900542  332512 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:22.900567  332512 pod_ready.go:86] duration metric: took 4.218046ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.902641  332512 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:23.214058  332512 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:23.214114  332512 pod_ready.go:86] duration metric: took 311.451444ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:23.560693  332512 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:23.813228  332512 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:05:23.813263  332512 pod_ready.go:86] duration metric: took 252.512477ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:24.013759  332512 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:24.413305  332512 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:24.413337  332512 pod_ready.go:86] duration metric: took 399.543508ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:24.413351  332512 pod_ready.go:40] duration metric: took 1.605180295s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:24.471536  332512 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:05:24.475655  332512 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
	I1219 03:05:24.164758  338195 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:24.164785  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.668000  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:22.106320  338816 pod_ready.go:104] pod "coredns-7d764666f9-vj7lm" is not "Ready", error: <nil>
	W1219 03:05:24.607413  338816 pod_ready.go:104] pod "coredns-7d764666f9-vj7lm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 19 03:05:15 embed-certs-805185 crio[773]: time="2025-12-19T03:05:15.735998871Z" level=info msg="Starting container: adfc53aee98233e5eeb5807b962886f86d5367de2d2e60081032654c9d0b7c2f" id=aa0950e0-2ac9-45b4-9fb3-550be510bd9b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:15 embed-certs-805185 crio[773]: time="2025-12-19T03:05:15.74060996Z" level=info msg="Started container" PID=1891 containerID=adfc53aee98233e5eeb5807b962886f86d5367de2d2e60081032654c9d0b7c2f description=kube-system/coredns-66bc5c9577-8gphx/coredns id=aa0950e0-2ac9-45b4-9fb3-550be510bd9b name=/runtime.v1.RuntimeService/StartContainer sandboxID=07d3214d9d2c563b6cfebdf9aacfe8d057b79eda98c18b31c3c7dde70e5903c9
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.438170524Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7ae11ed8-b2cf-45da-afda-58ede016d063 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.43826366Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.444676205Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce1d3a961a6ae37b9e719538867539cf150fc36cf5ad08f4c7dacc8fdcc702e8 UID:772c026a-4fb2-41ec-a206-d9daf7200d65 NetNS:/var/run/netns/aa78e785-eea4-4dcd-898c-f991265019be Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b6b0}] Aliases:map[]}"
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.444740535Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.457519537Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ce1d3a961a6ae37b9e719538867539cf150fc36cf5ad08f4c7dacc8fdcc702e8 UID:772c026a-4fb2-41ec-a206-d9daf7200d65 NetNS:/var/run/netns/aa78e785-eea4-4dcd-898c-f991265019be Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b6b0}] Aliases:map[]}"
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.457741679Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.458695822Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.459844732Z" level=info msg="Ran pod sandbox ce1d3a961a6ae37b9e719538867539cf150fc36cf5ad08f4c7dacc8fdcc702e8 with infra container: default/busybox/POD" id=7ae11ed8-b2cf-45da-afda-58ede016d063 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.461299627Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1fd2fcfd-8383-4fe5-af36-11932b599ab6 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.461434407Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1fd2fcfd-8383-4fe5-af36-11932b599ab6 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.461480277Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=1fd2fcfd-8383-4fe5-af36-11932b599ab6 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.462135683Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b469183a-6f5b-4667-9cbd-7fc651c91917 name=/runtime.v1.ImageService/PullImage
	Dec 19 03:05:18 embed-certs-805185 crio[773]: time="2025-12-19T03:05:18.463734616Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.732873277Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b469183a-6f5b-4667-9cbd-7fc651c91917 name=/runtime.v1.ImageService/PullImage
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.733584677Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db79cfa0-bd1b-4182-8383-6a0fd46aa5b6 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.735054821Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=71644206-f346-4c3b-9a25-70153f3a06de name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.738431632Z" level=info msg="Creating container: default/busybox/busybox" id=43610949-8f24-467a-b211-c045e6cf1b7e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.738680601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.743391818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.743981967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.781578387Z" level=info msg="Created container 6ce150fcf95182cccdcac2f1357aea488ee36b8144c888055dbe2ff7b0e89080: default/busybox/busybox" id=43610949-8f24-467a-b211-c045e6cf1b7e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.78232817Z" level=info msg="Starting container: 6ce150fcf95182cccdcac2f1357aea488ee36b8144c888055dbe2ff7b0e89080" id=961b102e-902b-40f3-99db-fa2ef4e94b5c name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:19 embed-certs-805185 crio[773]: time="2025-12-19T03:05:19.785201412Z" level=info msg="Started container" PID=1967 containerID=6ce150fcf95182cccdcac2f1357aea488ee36b8144c888055dbe2ff7b0e89080 description=default/busybox/busybox id=961b102e-902b-40f3-99db-fa2ef4e94b5c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce1d3a961a6ae37b9e719538867539cf150fc36cf5ad08f4c7dacc8fdcc702e8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	6ce150fcf9518       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   ce1d3a961a6ae       busybox                                      default
	adfc53aee9823       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   07d3214d9d2c5       coredns-66bc5c9577-8gphx                     kube-system
	85748570821c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   4a3f34fe1c958       storage-provisioner                          kube-system
	34f48638a09a4       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   67f6f58a8936f       kindnet-jj9ms                                kube-system
	275c18db65f88       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      24 seconds ago      Running             kube-proxy                0                   5b19e615e576a       kube-proxy-p8pqg                             kube-system
	d0973571518fd       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      34 seconds ago      Running             kube-apiserver            0                   d25e8d41aa305       kube-apiserver-embed-certs-805185            kube-system
	c38d61fac9186       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      34 seconds ago      Running             kube-scheduler            0                   3fc4f841f748f       kube-scheduler-embed-certs-805185            kube-system
	2c730354bd038       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   7f5f0d251493f       etcd-embed-certs-805185                      kube-system
	fb5b4b91b9a41       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      34 seconds ago      Running             kube-controller-manager   0                   134aee92b0a93       kube-controller-manager-embed-certs-805185   kube-system
	
	
	==> coredns [adfc53aee98233e5eeb5807b962886f86d5367de2d2e60081032654c9d0b7c2f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59336 - 32196 "HINFO IN 928327766711512777.3964709963108490138. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.020833894s
	
	
	==> describe nodes <==
	Name:               embed-certs-805185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-805185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-805185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-805185
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:05:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:05:27 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:05:27 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:05:27 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:05:27 +0000   Fri, 19 Dec 2025 03:05:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-805185
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e529c61b-35ad-4151-ab38-525026482d8c
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-8gphx                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-805185                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-jj9ms                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-805185             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-805185    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-p8pqg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-805185             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-805185 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [2c730354bd0384907d63f75f28e51f5252fa4398d973699953a7bbb5f39d5916] <==
	{"level":"warn","ts":"2025-12-19T03:04:54.035894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.044660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.052123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.059100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.067293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.078518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.085870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.092369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.099698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.106851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.113614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.121807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.130065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.136742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.143731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.151108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.157683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.164350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.171168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.177969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.184771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.201033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.207800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:54.214653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:21.635294Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.815188ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597820640267604 > lease_revoke:<id:06ed9b3491400cae>","response":"size:28"}
	
	
	==> kernel <==
	 03:05:27 up 47 min,  0 user,  load average: 8.27, 4.77, 2.83
	Linux embed-certs-805185 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [34f48638a09a4c93f3d8ae74dbe4ec1a62840150e98b4e02a14af78d405d10a3] <==
	I1219 03:05:04.857113       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:05:04.872290       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1219 03:05:04.872459       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:05:04.872484       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:05:04.872506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:05:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:05:05.116296       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:05:05.116516       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:05:05.116531       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:05:05.116686       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:05:05.616744       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:05:05.616775       1 metrics.go:72] Registering metrics
	I1219 03:05:05.616865       1 controller.go:711] "Syncing nftables rules"
	I1219 03:05:15.118824       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:05:15.118864       1 main.go:301] handling current node
	I1219 03:05:25.118884       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:05:25.118927       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0973571518fd32440ca550b8897ddb6ac28f9934e1f6de37985ef2ec01fb747] <==
	I1219 03:04:54.758555       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1219 03:04:54.798582       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1219 03:04:54.845184       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:04:54.848647       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:54.848785       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1219 03:04:54.854407       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:04:54.854669       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1219 03:04:54.955723       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1219 03:04:55.647832       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1219 03:04:55.651915       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1219 03:04:55.651939       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1219 03:04:56.155004       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:04:56.196246       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:04:56.251642       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1219 03:04:56.257842       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1219 03:04:56.259169       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:04:56.263680       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:04:56.697243       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 03:04:57.293472       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:04:57.302052       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:04:57.309198       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:05:02.303273       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:05:02.307162       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:05:02.399216       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1219 03:05:02.450056       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fb5b4b91b9a4158f1c1da108c6e9dc5ef861c09ca0c3bc8ba972ca1d68e2ecc1] <==
	I1219 03:05:01.691344       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1219 03:05:01.696018       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 03:05:01.696040       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:05:01.696138       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 03:05:01.697284       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:05:01.697305       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 03:05:01.697328       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:05:01.697404       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1219 03:05:01.697424       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:05:01.697451       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:05:01.697543       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 03:05:01.697550       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:05:01.697571       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:05:01.697627       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1219 03:05:01.697544       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 03:05:01.698173       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:05:01.698943       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 03:05:01.699082       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:05:01.701487       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1219 03:05:01.703733       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:01.705930       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:05:01.709284       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 03:05:01.716002       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1219 03:05:01.723315       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:16.693863       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [275c18db65f886526113db8d0d3a3d3937a0e86679caec5680120fea585d05d4] <==
	I1219 03:05:03.415454       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:03.494590       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:05:03.594806       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:05:03.594845       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:05:03.595016       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:03.625895       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:03.625956       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:05:03.639688       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:03.641271       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:05:03.641299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:03.642952       1 config.go:200] "Starting service config controller"
	I1219 03:05:03.642965       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:03.643061       1 config.go:309] "Starting node config controller"
	I1219 03:05:03.643068       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:03.643076       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:03.643323       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:03.643334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:03.643378       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:03.643384       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:03.744323       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:05:03.744333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:05:03.744373       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c38d61fac9186e28d6c6515fa20273127f3f226f19ff069b57b66b53070e3fee] <==
	E1219 03:04:54.709655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:04:54.709737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 03:04:54.711172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:04:54.711251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:04:54.711249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:04:54.711524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:04:54.711530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:04:54.711668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:04:54.711771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:04:54.711795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:04:54.711954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:04:54.712232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:04:54.712326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:04:55.582423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:04:55.695945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:04:55.741902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:04:55.759608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:04:55.829894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:04:55.866265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:04:55.886846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:04:55.912430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:04:55.942549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 03:04:55.969767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:04:56.031804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1219 03:04:57.705688       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: I1219 03:05:02.445876    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbh8f\" (UniqueName: \"kubernetes.io/projected/0bbe467b-7501-4a75-93bb-b1c33a1da403-kube-api-access-fbh8f\") pod \"kube-proxy-p8pqg\" (UID: \"0bbe467b-7501-4a75-93bb-b1c33a1da403\") " pod="kube-system/kube-proxy-p8pqg"
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: I1219 03:05:02.445905    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d0e51745-1c64-48ae-b569-6a0f1017cc8d-cni-cfg\") pod \"kindnet-jj9ms\" (UID: \"d0e51745-1c64-48ae-b569-6a0f1017cc8d\") " pod="kube-system/kindnet-jj9ms"
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: I1219 03:05:02.445992    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0e51745-1c64-48ae-b569-6a0f1017cc8d-lib-modules\") pod \"kindnet-jj9ms\" (UID: \"d0e51745-1c64-48ae-b569-6a0f1017cc8d\") " pod="kube-system/kindnet-jj9ms"
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: I1219 03:05:02.446056    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bbe467b-7501-4a75-93bb-b1c33a1da403-lib-modules\") pod \"kube-proxy-p8pqg\" (UID: \"0bbe467b-7501-4a75-93bb-b1c33a1da403\") " pod="kube-system/kube-proxy-p8pqg"
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: I1219 03:05:02.446085    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0bbe467b-7501-4a75-93bb-b1c33a1da403-kube-proxy\") pod \"kube-proxy-p8pqg\" (UID: \"0bbe467b-7501-4a75-93bb-b1c33a1da403\") " pod="kube-system/kube-proxy-p8pqg"
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: I1219 03:05:02.446108    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bbe467b-7501-4a75-93bb-b1c33a1da403-xtables-lock\") pod \"kube-proxy-p8pqg\" (UID: \"0bbe467b-7501-4a75-93bb-b1c33a1da403\") " pod="kube-system/kube-proxy-p8pqg"
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: I1219 03:05:02.446131    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0e51745-1c64-48ae-b569-6a0f1017cc8d-xtables-lock\") pod \"kindnet-jj9ms\" (UID: \"d0e51745-1c64-48ae-b569-6a0f1017cc8d\") " pod="kube-system/kindnet-jj9ms"
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: E1219 03:05:02.553083    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: E1219 03:05:02.553136    1308 projected.go:196] Error preparing data for projected volume kube-api-access-fbh8f for pod kube-system/kube-proxy-p8pqg: configmap "kube-root-ca.crt" not found
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: E1219 03:05:02.553278    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bbe467b-7501-4a75-93bb-b1c33a1da403-kube-api-access-fbh8f podName:0bbe467b-7501-4a75-93bb-b1c33a1da403 nodeName:}" failed. No retries permitted until 2025-12-19 03:05:03.05323974 +0000 UTC m=+6.014213860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fbh8f" (UniqueName: "kubernetes.io/projected/0bbe467b-7501-4a75-93bb-b1c33a1da403-kube-api-access-fbh8f") pod "kube-proxy-p8pqg" (UID: "0bbe467b-7501-4a75-93bb-b1c33a1da403") : configmap "kube-root-ca.crt" not found
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: E1219 03:05:02.553522    1308 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: E1219 03:05:02.553554    1308 projected.go:196] Error preparing data for projected volume kube-api-access-c785d for pod kube-system/kindnet-jj9ms: configmap "kube-root-ca.crt" not found
	Dec 19 03:05:02 embed-certs-805185 kubelet[1308]: E1219 03:05:02.553638    1308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d0e51745-1c64-48ae-b569-6a0f1017cc8d-kube-api-access-c785d podName:d0e51745-1c64-48ae-b569-6a0f1017cc8d nodeName:}" failed. No retries permitted until 2025-12-19 03:05:03.053617215 +0000 UTC m=+6.014591323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c785d" (UniqueName: "kubernetes.io/projected/d0e51745-1c64-48ae-b569-6a0f1017cc8d-kube-api-access-c785d") pod "kindnet-jj9ms" (UID: "d0e51745-1c64-48ae-b569-6a0f1017cc8d") : configmap "kube-root-ca.crt" not found
	Dec 19 03:05:05 embed-certs-805185 kubelet[1308]: I1219 03:05:05.176217    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p8pqg" podStartSLOduration=3.176183037 podStartE2EDuration="3.176183037s" podCreationTimestamp="2025-12-19 03:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:04.170254677 +0000 UTC m=+7.131228795" watchObservedRunningTime="2025-12-19 03:05:05.176183037 +0000 UTC m=+8.137157154"
	Dec 19 03:05:05 embed-certs-805185 kubelet[1308]: I1219 03:05:05.479956    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jj9ms" podStartSLOduration=2.186242361 podStartE2EDuration="3.479933132s" podCreationTimestamp="2025-12-19 03:05:02 +0000 UTC" firstStartedPulling="2025-12-19 03:05:03.330657749 +0000 UTC m=+6.291631863" lastFinishedPulling="2025-12-19 03:05:04.624348529 +0000 UTC m=+7.585322634" observedRunningTime="2025-12-19 03:05:05.177471975 +0000 UTC m=+8.138446091" watchObservedRunningTime="2025-12-19 03:05:05.479933132 +0000 UTC m=+8.440907250"
	Dec 19 03:05:15 embed-certs-805185 kubelet[1308]: I1219 03:05:15.314996    1308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 19 03:05:15 embed-certs-805185 kubelet[1308]: I1219 03:05:15.443144    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt7jz\" (UniqueName: \"kubernetes.io/projected/a7c7ec9b-ed70-43dd-aa6c-c365da9d4588-kube-api-access-qt7jz\") pod \"storage-provisioner\" (UID: \"a7c7ec9b-ed70-43dd-aa6c-c365da9d4588\") " pod="kube-system/storage-provisioner"
	Dec 19 03:05:15 embed-certs-805185 kubelet[1308]: I1219 03:05:15.443192    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a7c7ec9b-ed70-43dd-aa6c-c365da9d4588-tmp\") pod \"storage-provisioner\" (UID: \"a7c7ec9b-ed70-43dd-aa6c-c365da9d4588\") " pod="kube-system/storage-provisioner"
	Dec 19 03:05:15 embed-certs-805185 kubelet[1308]: I1219 03:05:15.443211    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ddec921-4727-4a79-b09e-05dfa120cad9-config-volume\") pod \"coredns-66bc5c9577-8gphx\" (UID: \"4ddec921-4727-4a79-b09e-05dfa120cad9\") " pod="kube-system/coredns-66bc5c9577-8gphx"
	Dec 19 03:05:15 embed-certs-805185 kubelet[1308]: I1219 03:05:15.443240    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrnh\" (UniqueName: \"kubernetes.io/projected/4ddec921-4727-4a79-b09e-05dfa120cad9-kube-api-access-fxrnh\") pod \"coredns-66bc5c9577-8gphx\" (UID: \"4ddec921-4727-4a79-b09e-05dfa120cad9\") " pod="kube-system/coredns-66bc5c9577-8gphx"
	Dec 19 03:05:16 embed-certs-805185 kubelet[1308]: I1219 03:05:16.213970    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8gphx" podStartSLOduration=14.213932272 podStartE2EDuration="14.213932272s" podCreationTimestamp="2025-12-19 03:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:16.213768504 +0000 UTC m=+19.174742621" watchObservedRunningTime="2025-12-19 03:05:16.213932272 +0000 UTC m=+19.174906388"
	Dec 19 03:05:16 embed-certs-805185 kubelet[1308]: I1219 03:05:16.237309    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.237286264 podStartE2EDuration="13.237286264s" podCreationTimestamp="2025-12-19 03:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:16.23713695 +0000 UTC m=+19.198111067" watchObservedRunningTime="2025-12-19 03:05:16.237286264 +0000 UTC m=+19.198260382"
	Dec 19 03:05:18 embed-certs-805185 kubelet[1308]: I1219 03:05:18.163172    1308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr87t\" (UniqueName: \"kubernetes.io/projected/772c026a-4fb2-41ec-a206-d9daf7200d65-kube-api-access-dr87t\") pod \"busybox\" (UID: \"772c026a-4fb2-41ec-a206-d9daf7200d65\") " pod="default/busybox"
	Dec 19 03:05:20 embed-certs-805185 kubelet[1308]: I1219 03:05:20.229214    1308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.956444867 podStartE2EDuration="2.229187308s" podCreationTimestamp="2025-12-19 03:05:18 +0000 UTC" firstStartedPulling="2025-12-19 03:05:18.461730079 +0000 UTC m=+21.422704191" lastFinishedPulling="2025-12-19 03:05:19.734472536 +0000 UTC m=+22.695446632" observedRunningTime="2025-12-19 03:05:20.229146173 +0000 UTC m=+23.190120309" watchObservedRunningTime="2025-12-19 03:05:20.229187308 +0000 UTC m=+23.190161425"
	Dec 19 03:05:26 embed-certs-805185 kubelet[1308]: E1219 03:05:26.251095    1308 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48848->127.0.0.1:43597: write tcp 127.0.0.1:48848->127.0.0.1:43597: write: connection reset by peer
	
	
	==> storage-provisioner [85748570821c3f51cafc2fb902b00b9434967c71f279cddcfbe17c50c61785e2] <==
	I1219 03:05:15.745397       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:05:15.762407       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:05:15.762838       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 03:05:15.766561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:15.773395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:05:15.773689       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:05:15.773980       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-805185_07bf9b2c-2afb-4fa0-ac10-29bb86d1f5e0!
	I1219 03:05:15.773845       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0455c614-c7de-4422-8471-95971b74dc6a", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-805185_07bf9b2c-2afb-4fa0-ac10-29bb86d1f5e0 became leader
	W1219 03:05:15.777288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:15.782987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:05:15.874445       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-805185_07bf9b2c-2afb-4fa0-ac10-29bb86d1f5e0!
	W1219 03:05:17.787024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:17.791880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:19.796099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:19.801011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:21.805118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:21.809211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:23.813550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:23.854364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:25.859091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:25.864627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:27.869447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:27.877736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-805185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (471.398335ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:05:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
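For reference, a minimal hand-run sketch of the paused-container check that failed above, assuming the node container default-k8s-diff-port-717222 is still running; `sudo runc list -f json` is the exact command quoted in the MK_ADDON_ENABLE_PAUSED error, and the ls simply checks for the /run/runc state directory that the error reports as missing:

	# does runc's state directory exist inside the node container?
	docker exec default-k8s-diff-port-717222 ls -ld /run/runc

	# re-run the listing the paused-container check reports as failing
	docker exec default-k8s-diff-port-717222 sudo runc list -f json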
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-717222 describe deploy/metrics-server -n kube-system: exit status 1 (74.949749ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-717222 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
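For reference, a hand-run equivalent of the image assertion above (a sketch only; it is meaningful only once the metrics-server deployment exists, whereas here it was NotFound). The test expects the printed image to contain "fake.domain/registry.k8s.io/echoserver:1.4":

	kubectl --context default-k8s-diff-port-717222 -n kube-system \
	  get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'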
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-717222
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-717222:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	        "Created": "2025-12-19T03:04:47.206515223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335357,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:04:47.243667303Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hosts",
	        "LogPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59-json.log",
	        "Name": "/default-k8s-diff-port-717222",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-717222:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-717222",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	                "LowerDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-717222",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-717222/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-717222",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d789e8a6e69862a54b8fa107a8c77353945a818abe8a6604da69a6e3e72df6a9",
	            "SandboxKey": "/var/run/docker/netns/d789e8a6e698",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-717222": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61bece957d17b845e006f35e9e337693d4d396daf2e4f93e70692be3f3288cbb",
	                    "EndpointID": "21ba108cbb00484ffcad25ed07b98e49ee8b8405f70752198248da82f0b1a033",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ae:68:53:4f:27:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-717222",
	                        "f8284300a033"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25: (1.483381508s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p calico-821749                                                                                                                                                                                                                              │ calico-821749                │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:00.811825  338816 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:00.812140  338816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:00.812154  338816 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:00.812160  338816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:00.812448  338816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:00.813127  338816 out.go:368] Setting JSON to false
	I1219 03:05:00.814745  338816 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2852,"bootTime":1766110649,"procs":362,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:00.814830  338816 start.go:143] virtualization: kvm guest
	I1219 03:05:00.816988  338816 out.go:179] * [no-preload-278042] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:00.818543  338816 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:00.818545  338816 notify.go:221] Checking for updates...
	I1219 03:05:00.819816  338816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:00.821563  338816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:00.822827  338816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:00.824185  338816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:00.825466  338816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:00.831448  338816 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:05:00.832147  338816 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:00.862204  338816 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:00.862370  338816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:00.934433  338816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-19 03:05:00.923077867 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:00.934586  338816 docker.go:319] overlay module found
	I1219 03:05:00.937154  338816 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:00.938308  338816 start.go:309] selected driver: docker
	I1219 03:05:00.938324  338816 start.go:928] validating driver "docker" against &{Name:no-preload-278042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:00.938428  338816 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:00.939209  338816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:00.997378  338816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-12-19 03:05:00.987371827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:00.997667  338816 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:00.997838  338816 cni.go:84] Creating CNI manager for ""
	I1219 03:05:00.997920  338816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:00.997990  338816 start.go:353] cluster config:
	{Name:no-preload-278042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:00.999840  338816 out.go:179] * Starting "no-preload-278042" primary control-plane node in "no-preload-278042" cluster
	I1219 03:05:01.000906  338816 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:01.002150  338816 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:01.003607  338816 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:05:01.003688  338816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:01.003736  338816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/config.json ...
	I1219 03:05:01.003846  338816 cache.go:107] acquiring lock: {Name:mk4042534b37a078c50eadc76c3b72fca5a085d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.003887  338816 cache.go:107] acquiring lock: {Name:mke7e35bfc025797e6268aab9dc90b26c2336a31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.003922  338816 cache.go:107] acquiring lock: {Name:mka478e3dbccb645e168d75bf94249d243927311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.003971  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1219 03:05:01.003982  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1219 03:05:01.003988  338816 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 155.716µs
	I1219 03:05:01.004001  338816 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1219 03:05:01.003992  338816 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 76.157µs
	I1219 03:05:01.003978  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1219 03:05:01.003971  338816 cache.go:107] acquiring lock: {Name:mk10e329e7d25506a0a2796935772d4b44680659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004021  338816 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 145.653µs
	I1219 03:05:01.003992  338816 cache.go:107] acquiring lock: {Name:mk5e69a6045de9e68e630d6b1379ad8de497ff66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004032  338816 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1219 03:05:01.004011  338816 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1219 03:05:01.004032  338816 cache.go:107] acquiring lock: {Name:mk78bf1c270ef16831276830937a4a078010d84e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004065  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1219 03:05:01.004067  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1219 03:05:01.004070  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1219 03:05:01.004075  338816 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 124.696µs
	I1219 03:05:01.004090  338816 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1219 03:05:01.004071  338816 cache.go:107] acquiring lock: {Name:mkc144e93b6fbc13859f0863c101486208e08799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004101  338816 cache.go:107] acquiring lock: {Name:mkf0879b2828224e5a74fc93c99778fb85bb6a55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.004184  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1219 03:05:01.004210  338816 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 197.6µs
	I1219 03:05:01.004235  338816 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1219 03:05:01.004077  338816 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 87.539µs
	I1219 03:05:01.004251  338816 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1219 03:05:01.004078  338816 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 48.527µs
	I1219 03:05:01.004267  338816 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1219 03:05:01.004267  338816 cache.go:115] /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1219 03:05:01.004279  338816 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 228.447µs
	I1219 03:05:01.004301  338816 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22230-4987/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1219 03:05:01.004316  338816 cache.go:87] Successfully saved all images to host disk.
	I1219 03:05:01.024229  338816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:01.024248  338816 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:01.024265  338816 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:01.024298  338816 start.go:360] acquireMachinesLock for no-preload-278042: {Name:mk30a7004ec2e933aa2c562456de05f69cb301f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:01.024357  338816 start.go:364] duration metric: took 39.381µs to acquireMachinesLock for "no-preload-278042"
	I1219 03:05:01.024380  338816 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:01.024388  338816 fix.go:54] fixHost starting: 
	I1219 03:05:01.024583  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:01.041773  338816 fix.go:112] recreateIfNeeded on no-preload-278042: state=Stopped err=<nil>
	W1219 03:05:01.041804  338816 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:04:57.915278  330835 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:04:57.920246  330835 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1219 03:04:57.920265  330835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:04:57.933366  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1219 03:04:58.189220  330835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:04:58.189418  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:04:58.189493  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-805185 minikube.k8s.io/updated_at=2025_12_19T03_04_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=embed-certs-805185 minikube.k8s.io/primary=true
	I1219 03:04:58.204490  330835 ops.go:34] apiserver oom_adj: -16
	I1219 03:04:58.298969  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:04:58.800036  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:04:59.299490  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:04:59.799880  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:00.299637  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:00.799931  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:01.299100  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:01.799101  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:02.299749  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:02.799440  330835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:02.869115  330835 kubeadm.go:1114] duration metric: took 4.679815393s to wait for elevateKubeSystemPrivileges
	I1219 03:05:02.869148  330835 kubeadm.go:403] duration metric: took 14.349960475s to StartCluster
	I1219 03:05:02.869180  330835 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:02.869249  330835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:02.870257  330835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:02.870499  330835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:05:02.870553  330835 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:05:02.870619  330835 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:05:02.870730  330835 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-805185"
	I1219 03:05:02.870760  330835 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-805185"
	I1219 03:05:02.870772  330835 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:02.870782  330835 addons.go:70] Setting default-storageclass=true in profile "embed-certs-805185"
	I1219 03:05:02.870791  330835 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:02.870806  330835 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-805185"
	I1219 03:05:02.871180  330835 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:02.871327  330835 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:02.873715  330835 out.go:179] * Verifying Kubernetes components...
	I1219 03:05:02.874932  330835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:02.895189  330835 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	I1219 03:05:02.895251  330835 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:02.895421  330835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:05:03.329138  332512 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1219 03:05:03.329210  332512 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 03:05:03.329320  332512 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 03:05:03.329406  332512 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 03:05:03.329461  332512 kubeadm.go:319] OS: Linux
	I1219 03:05:03.329527  332512 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 03:05:03.329596  332512 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 03:05:03.329668  332512 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 03:05:03.329741  332512 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 03:05:03.329814  332512 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 03:05:03.329878  332512 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 03:05:03.329940  332512 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 03:05:03.330004  332512 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 03:05:03.330095  332512 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 03:05:03.330220  332512 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 03:05:03.330337  332512 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 03:05:03.330416  332512 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 03:05:03.332609  332512 out.go:252]   - Generating certificates and keys ...
	I1219 03:05:03.332745  332512 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 03:05:03.332839  332512 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 03:05:03.332918  332512 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 03:05:03.333010  332512 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 03:05:03.333086  332512 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 03:05:03.333154  332512 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 03:05:03.333225  332512 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 03:05:03.333420  332512 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-717222 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1219 03:05:03.333526  332512 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 03:05:03.333723  332512 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-717222 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1219 03:05:03.333830  332512 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 03:05:03.333912  332512 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 03:05:03.333972  332512 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 03:05:03.334049  332512 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 03:05:03.334105  332512 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 03:05:03.334170  332512 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 03:05:03.334233  332512 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 03:05:03.334319  332512 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 03:05:03.334383  332512 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 03:05:03.334471  332512 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 03:05:03.334546  332512 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 03:05:03.336445  332512 out.go:252]   - Booting up control plane ...
	I1219 03:05:03.336544  332512 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 03:05:03.336637  332512 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 03:05:03.336726  332512 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 03:05:03.336836  332512 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 03:05:03.336933  332512 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 03:05:03.337051  332512 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 03:05:03.337160  332512 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 03:05:03.337210  332512 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 03:05:03.337389  332512 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 03:05:03.337514  332512 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 03:05:03.337589  332512 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001193265s
	I1219 03:05:03.337722  332512 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 03:05:03.337824  332512 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1219 03:05:03.337923  332512 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 03:05:03.338017  332512 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 03:05:03.338094  332512 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.923604324s
	I1219 03:05:03.338162  332512 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.718802088s
	I1219 03:05:03.338231  332512 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501648314s
	I1219 03:05:03.338348  332512 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 03:05:03.338487  332512 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 03:05:03.338561  332512 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 03:05:03.338818  332512 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-717222 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 03:05:03.338881  332512 kubeadm.go:319] [bootstrap-token] Using token: 777u07.wfsafuhj1sljp45a
	I1219 03:05:02.895778  330835 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:02.896665  330835 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:02.896684  330835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:02.896783  330835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:02.927058  330835 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:02.927082  330835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:02.927138  330835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:02.929643  330835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:02.952985  330835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:02.976888  330835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:05:03.018033  330835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:03.053612  330835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:03.071120  330835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:03.143029  330835 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1219 03:05:03.144016  330835 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:03.341793  332512 out.go:252]   - Configuring RBAC rules ...
	I1219 03:05:03.341945  332512 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 03:05:03.342060  332512 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 03:05:03.342250  332512 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 03:05:03.342430  332512 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 03:05:03.342593  332512 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 03:05:03.342722  332512 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 03:05:03.342910  332512 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 03:05:03.342981  332512 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 03:05:03.343054  332512 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 03:05:03.343064  332512 kubeadm.go:319] 
	I1219 03:05:03.343150  332512 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 03:05:03.343161  332512 kubeadm.go:319] 
	I1219 03:05:03.343284  332512 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 03:05:03.343298  332512 kubeadm.go:319] 
	I1219 03:05:03.343319  332512 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 03:05:03.343390  332512 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 03:05:03.343482  332512 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 03:05:03.343500  332512 kubeadm.go:319] 
	I1219 03:05:03.343575  332512 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 03:05:03.343583  332512 kubeadm.go:319] 
	I1219 03:05:03.343645  332512 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 03:05:03.343655  332512 kubeadm.go:319] 
	I1219 03:05:03.343746  332512 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 03:05:03.343839  332512 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 03:05:03.343939  332512 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 03:05:03.343945  332512 kubeadm.go:319] 
	I1219 03:05:03.344050  332512 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 03:05:03.344146  332512 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 03:05:03.344151  332512 kubeadm.go:319] 
	I1219 03:05:03.344251  332512 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token 777u07.wfsafuhj1sljp45a \
	I1219 03:05:03.344378  332512 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 03:05:03.344403  332512 kubeadm.go:319] 	--control-plane 
	I1219 03:05:03.344408  332512 kubeadm.go:319] 
	I1219 03:05:03.344513  332512 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 03:05:03.344517  332512 kubeadm.go:319] 
	I1219 03:05:03.344621  332512 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token 777u07.wfsafuhj1sljp45a \
	I1219 03:05:03.344776  332512 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
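	(A sketch of how the --discovery-token-ca-cert-hash printed above can be re-derived on the control-plane node, using the CA as minikube lays it out under /var/lib/minikube/certs and assuming the default RSA CA key:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	The result should match the sha256:e8b1... value in the join command.)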
	I1219 03:05:03.344787  332512 cni.go:84] Creating CNI manager for ""
	I1219 03:05:03.344797  332512 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:03.345546  330835 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:05:03.346443  332512 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 03:05:00.102910  338195 out.go:252] * Restarting existing docker container for "old-k8s-version-433330" ...
	I1219 03:05:00.103221  338195 cli_runner.go:164] Run: docker start old-k8s-version-433330
	I1219 03:05:00.531550  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:00.554813  338195 kic.go:430] container "old-k8s-version-433330" state is running.
	I1219 03:05:00.555252  338195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-433330
	I1219 03:05:00.578791  338195 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/config.json ...
	I1219 03:05:00.579080  338195 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:00.579155  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:00.602239  338195 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:00.602473  338195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1219 03:05:00.602484  338195 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:00.603264  338195 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46560->127.0.0.1:33118: read: connection reset by peer
	I1219 03:05:03.773374  338195 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-433330
	
	I1219 03:05:03.773404  338195 ubuntu.go:182] provisioning hostname "old-k8s-version-433330"
	I1219 03:05:03.773469  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:03.800136  338195 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:03.800480  338195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1219 03:05:03.800506  338195 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-433330 && echo "old-k8s-version-433330" | sudo tee /etc/hostname
	I1219 03:05:03.980642  338195 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-433330
	
	I1219 03:05:03.980825  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:04.012377  338195 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:04.012723  338195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1219 03:05:04.012753  338195 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-433330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-433330/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-433330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:04.173876  338195 main.go:144] libmachine: SSH cmd err, output: <nil>: 
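	(Whichever branch of the script above runs, the end state of /etc/hosts pins the machine name to the loopback alias, which can be confirmed with a plain grep:
	  grep old-k8s-version-433330 /etc/hosts
	  # 127.0.1.1 old-k8s-version-433330
	)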
	I1219 03:05:04.173906  338195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:04.173949  338195 ubuntu.go:190] setting up certificates
	I1219 03:05:04.173969  338195 provision.go:84] configureAuth start
	I1219 03:05:04.174050  338195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-433330
	I1219 03:05:04.196880  338195 provision.go:143] copyHostCerts
	I1219 03:05:04.196933  338195 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:04.196946  338195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:04.197007  338195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:04.197144  338195 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:04.197157  338195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:04.197197  338195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:04.197309  338195 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:04.197322  338195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:04.197364  338195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:04.197512  338195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-433330 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-433330]
	I1219 03:05:04.292658  338195 provision.go:177] copyRemoteCerts
	I1219 03:05:04.292750  338195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:04.292821  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:04.317808  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:04.433528  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:04.458555  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1219 03:05:04.482338  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:05:04.503353  338195 provision.go:87] duration metric: took 329.364483ms to configureAuth
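	(The server cert generated above carries the SANs listed in the provision.go:117 line. A quick way to confirm them once the cert has been copied to /etc/docker/server.pem, using standard openssl:
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	)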
	I1219 03:05:04.503391  338195 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:04.503595  338195 config.go:182] Loaded profile config "old-k8s-version-433330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1219 03:05:04.503785  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:04.524501  338195 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:04.524833  338195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1219 03:05:04.524860  338195 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:01.043743  338816 out.go:252] * Restarting existing docker container for "no-preload-278042" ...
	I1219 03:05:01.043821  338816 cli_runner.go:164] Run: docker start no-preload-278042
	I1219 03:05:01.342435  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:01.366361  338816 kic.go:430] container "no-preload-278042" state is running.
	I1219 03:05:01.366966  338816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-278042
	I1219 03:05:01.391891  338816 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/config.json ...
	I1219 03:05:01.392171  338816 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:01.392261  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:01.415534  338816 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:01.415884  338816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1219 03:05:01.415902  338816 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:01.416658  338816 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33222->127.0.0.1:33123: read: connection reset by peer
	I1219 03:05:04.580995  338816 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-278042
	
	I1219 03:05:04.581030  338816 ubuntu.go:182] provisioning hostname "no-preload-278042"
	I1219 03:05:04.581099  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:04.602209  338816 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:04.602494  338816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1219 03:05:04.602513  338816 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-278042 && echo "no-preload-278042" | sudo tee /etc/hostname
	I1219 03:05:04.764912  338816 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-278042
	
	I1219 03:05:04.765012  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:04.786295  338816 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:04.786596  338816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1219 03:05:04.786621  338816 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-278042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-278042/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-278042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:04.936767  338816 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:04.936797  338816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:04.936824  338816 ubuntu.go:190] setting up certificates
	I1219 03:05:04.936837  338816 provision.go:84] configureAuth start
	I1219 03:05:04.936888  338816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-278042
	I1219 03:05:04.957845  338816 provision.go:143] copyHostCerts
	I1219 03:05:04.957920  338816 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:04.957937  338816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:04.958000  338816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:04.958096  338816 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:04.958106  338816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:04.958134  338816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:04.958186  338816 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:04.958194  338816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:04.958218  338816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:04.958274  338816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.no-preload-278042 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-278042]
	I1219 03:05:04.997492  338816 provision.go:177] copyRemoteCerts
	I1219 03:05:04.997548  338816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:04.997579  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.020166  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.124551  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:05.143320  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:05:05.161475  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:05.184277  338816 provision.go:87] duration metric: took 247.423458ms to configureAuth
	I1219 03:05:05.184311  338816 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:05.184533  338816 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:05:05.184679  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.206856  338816 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:05.207191  338816 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1219 03:05:05.207233  338816 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:05.559981  338816 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:05.560014  338816 machine.go:97] duration metric: took 4.167815236s to provisionDockerMachine
	I1219 03:05:05.560030  338816 start.go:293] postStartSetup for "no-preload-278042" (driver="docker")
	I1219 03:05:05.560046  338816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:05.560113  338816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:05.560164  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.581839  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.687357  338816 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:05.691365  338816 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:05.691398  338816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:05.691411  338816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:05.691474  338816 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:05.691587  338816 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:05.691772  338816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:05.699679  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:05.719170  338816 start.go:296] duration metric: took 159.123492ms for postStartSetup
	I1219 03:05:05.719255  338816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:05.719295  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.740866  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.037451  338195 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:05.037477  338195 machine.go:97] duration metric: took 4.458376726s to provisionDockerMachine
	I1219 03:05:05.037573  338195 start.go:293] postStartSetup for "old-k8s-version-433330" (driver="docker")
	I1219 03:05:05.037589  338195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:05.037644  338195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:05.037684  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:05.057696  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:05.159796  338195 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:05.164068  338195 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:05.164098  338195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:05.164112  338195 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:05.164158  338195 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:05.164279  338195 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:05.164418  338195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:05.173600  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:05.194256  338195 start.go:296] duration metric: took 156.665743ms for postStartSetup
	I1219 03:05:05.194339  338195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:05.194411  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:05.214524  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:05.323851  338195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:05.328539  338195 fix.go:56] duration metric: took 5.265541463s for fixHost
	I1219 03:05:05.328561  338195 start.go:83] releasing machines lock for "old-k8s-version-433330", held for 5.265588686s
	I1219 03:05:05.328620  338195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-433330
	I1219 03:05:05.348534  338195 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:05.348582  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:05.348648  338195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:05.348765  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:05.368174  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:05.369386  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:05.468618  338195 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:05.532150  338195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:05.572906  338195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:05.578881  338195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:05.578950  338195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:05.587647  338195 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:05.587672  338195 start.go:496] detecting cgroup driver to use...
	I1219 03:05:05.587715  338195 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:05.587762  338195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:05.602449  338195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:05.614770  338195 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:05.614837  338195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:05.629394  338195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:05.643068  338195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:05.732404  338195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:05.820424  338195 docker.go:234] disabling docker service ...
	I1219 03:05:05.820489  338195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:05.835029  338195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:05.849083  338195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:05.944949  338195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:06.047437  338195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:06.060193  338195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:06.075557  338195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1219 03:05:06.075622  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.084882  338195 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:06.084948  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.095824  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.105026  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.113861  338195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:06.122038  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.131004  338195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.140211  338195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.149510  338195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:06.156919  338195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
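	(Reconstructed from the sed edits above, not dumped from the node, the keys touched in /etc/crio/crio.conf.d/02-crio.conf should end up approximately as:
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	The daemon-reload and crio restart that follow pick these up.)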
	I1219 03:05:06.164379  338195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:06.274569  338195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:06.430797  338195 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:06.430858  338195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:06.435163  338195 start.go:564] Will wait 60s for crictl version
	I1219 03:05:06.435236  338195 ssh_runner.go:195] Run: which crictl
	I1219 03:05:06.439145  338195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:06.464904  338195 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:06.465012  338195 ssh_runner.go:195] Run: crio --version
	I1219 03:05:06.496536  338195 ssh_runner.go:195] Run: crio --version
	I1219 03:05:06.533933  338195 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1219 03:05:05.843426  338816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:05.848564  338816 fix.go:56] duration metric: took 4.824169182s for fixHost
	I1219 03:05:05.848597  338816 start.go:83] releasing machines lock for "no-preload-278042", held for 4.824225362s
	I1219 03:05:05.848657  338816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-278042
	I1219 03:05:05.867876  338816 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:05.867932  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.867985  338816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:05.868068  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:05.893780  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.896543  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:05.993630  338816 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:06.063431  338816 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:06.101038  338816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:06.106013  338816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:06.106085  338816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:06.113838  338816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:06.113860  338816 start.go:496] detecting cgroup driver to use...
	I1219 03:05:06.113894  338816 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:06.113943  338816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:06.127982  338816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:06.142051  338816 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:06.142102  338816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:06.156354  338816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:06.170372  338816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:06.270550  338816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:06.360630  338816 docker.go:234] disabling docker service ...
	I1219 03:05:06.360733  338816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:06.376556  338816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:06.390325  338816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:06.478371  338816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:06.569492  338816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:06.583135  338816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:06.599607  338816 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:06.599657  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.610077  338816 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:06.610131  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.619771  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.630334  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.641094  338816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:06.650596  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.660930  338816 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.669881  338816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:06.680885  338816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:06.689816  338816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:06.697737  338816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:06.799123  338816 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:06.962131  338816 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:06.962209  338816 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:06.966939  338816 start.go:564] Will wait 60s for crictl version
	I1219 03:05:06.967023  338816 ssh_runner.go:195] Run: which crictl
	I1219 03:05:06.972096  338816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:07.007534  338816 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:07.007600  338816 ssh_runner.go:195] Run: crio --version
	I1219 03:05:07.038097  338816 ssh_runner.go:195] Run: crio --version
	I1219 03:05:07.069654  338816 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:05:03.348540  332512 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:05:03.353332  332512 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1219 03:05:03.353350  332512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:05:03.367762  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
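	(After the CNI manifest is applied, a hypothetical readiness check, assuming the kindnet manifest deploys a DaemonSet named "kindnet" in kube-system as minikube's bundled manifest usually does:
	  sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system rollout status daemonset kindnet --timeout=60s
	)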
	I1219 03:05:03.590042  332512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:05:03.590142  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:03.590173  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-717222 minikube.k8s.io/updated_at=2025_12_19T03_05_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=default-k8s-diff-port-717222 minikube.k8s.io/primary=true
	I1219 03:05:03.689380  332512 ops.go:34] apiserver oom_adj: -16
	I1219 03:05:03.689428  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:04.189758  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:04.689971  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:05.189749  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:05.689779  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:06.190046  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:06.689620  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
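	(The repeated "get sa default" runs above are a poll loop: the cluster is considered usable once the controller-manager has created the "default" ServiceAccount. A minimal shell equivalent of the same wait, assuming the same kubectl/kubeconfig paths:
	  until sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done
	)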
	I1219 03:05:03.347245  330835 addons.go:546] duration metric: took 476.630979ms for enable addons: enabled=[storage-provisioner default-storageclass]
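	(A sketch of how the two enabled addons could be verified from outside the node; the storage-provisioner pod name is minikube's usual default and is an assumption here:
	  kubectl -n kube-system get pod storage-provisioner
	  kubectl get storageclass
	)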
	I1219 03:05:03.650437  330835 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-805185" context rescaled to 1 replicas
	W1219 03:05:05.147152  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	W1219 03:05:07.147663  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	I1219 03:05:06.535159  338195 cli_runner.go:164] Run: docker network inspect old-k8s-version-433330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:06.554369  338195 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:06.558757  338195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:06.569671  338195 kubeadm.go:884] updating cluster {Name:old-k8s-version-433330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-433330 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:06.569838  338195 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1219 03:05:06.569903  338195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:06.604287  338195 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:06.604328  338195 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:06.604389  338195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:06.631597  338195 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:06.631622  338195 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:06.631631  338195 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1219 03:05:06.631776  338195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-433330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-433330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:06.631902  338195 ssh_runner.go:195] Run: crio config
	I1219 03:05:06.683350  338195 cni.go:84] Creating CNI manager for ""
	I1219 03:05:06.683378  338195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:06.683395  338195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:06.683426  338195 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-433330 NodeName:old-k8s-version-433330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:06.683581  338195 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-433330"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:06.683646  338195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1219 03:05:06.692391  338195 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:06.692452  338195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:06.700694  338195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:05:06.714596  338195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:06.732986  338195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
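	(The rendered kubeadm config now sits at /var/tmp/minikube/kubeadm.yaml.new. On a scratch node it could be sanity-checked without applying anything via kubeadm's dry-run mode; on a node that is already running a cluster, preflight checks would complain about ports and existing manifests, so this is only a sketch:
	  sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	)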
	I1219 03:05:06.748885  338195 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:06.753762  338195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:06.765270  338195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:06.853466  338195 ssh_runner.go:195] Run: sudo systemctl start kubelet
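	(A quick way to confirm the kubelet unit and the 10-kubeadm.conf drop-in written above actually landed, using standard systemd tooling:
	  systemctl cat kubelet
	  sudo systemctl is-active kubelet
	)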
	I1219 03:05:06.879610  338195 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330 for IP: 192.168.76.2
	I1219 03:05:06.879634  338195 certs.go:195] generating shared ca certs ...
	I1219 03:05:06.879654  338195 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:06.879837  338195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:05:06.879900  338195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:05:06.879916  338195 certs.go:257] generating profile certs ...
	I1219 03:05:06.880036  338195 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.key
	I1219 03:05:06.880106  338195 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/apiserver.key.c5e580e0
	I1219 03:05:06.880162  338195 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/proxy-client.key
	I1219 03:05:06.880339  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:05:06.880392  338195 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:05:06.880408  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:05:06.880444  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:05:06.880486  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:05:06.880524  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:05:06.880587  338195 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:06.881384  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:05:06.918067  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:05:06.940891  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:05:06.961602  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:05:06.989284  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1219 03:05:07.012446  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:05:07.032887  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:05:07.052627  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:05:07.073776  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:05:07.094304  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:05:07.113571  338195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:05:07.132783  338195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:05:07.150244  338195 ssh_runner.go:195] Run: openssl version
	I1219 03:05:07.159577  338195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.168651  338195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:05:07.178541  338195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.183698  338195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.183792  338195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.226693  338195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:05:07.234922  338195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.243549  338195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:05:07.252924  338195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.257881  338195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.257959  338195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.299907  338195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:05:07.307862  338195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.319534  338195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:05:07.328295  338195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.332255  338195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.332322  338195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.369339  338195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:05:07.377278  338195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:05:07.381456  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:05:07.430152  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:05:07.503045  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:05:07.580955  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:05:07.642970  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:05:07.698653  338195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
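Each of the `openssl x509 -noout -in <cert> -checkend 86400` runs above exits non-zero only if the certificate expires within the next 24 hours, presumably so stale control-plane certs can be regenerated before the restart. A minimal Go sketch of the same check (hypothetical helper, not minikube's certs.go):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}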
	I1219 03:05:07.757686  338195 kubeadm.go:401] StartCluster: {Name:old-k8s-version-433330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-433330 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:07.757838  338195 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:05:07.757917  338195 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:05:07.795715  338195 cri.go:92] found id: "ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e"
	I1219 03:05:07.795786  338195 cri.go:92] found id: "dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386"
	I1219 03:05:07.795797  338195 cri.go:92] found id: "6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b"
	I1219 03:05:07.795803  338195 cri.go:92] found id: "e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100"
	I1219 03:05:07.795808  338195 cri.go:92] found id: ""
	I1219 03:05:07.795857  338195 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:05:07.810766  338195 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:05:07Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:05:07.810833  338195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:05:07.821138  338195 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:05:07.821160  338195 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:05:07.821214  338195 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:05:07.830813  338195 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:05:07.831943  338195 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-433330" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:07.832576  338195 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-433330" cluster setting kubeconfig missing "old-k8s-version-433330" context setting]
	I1219 03:05:07.833610  338195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
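`kubeconfig needs updating (will repair)` means the shared kubeconfig has no cluster or context entry for this profile yet, so both are written back under the WriteFile lock shown above. A hedged client-go sketch of that repair (illustrative only; the field choices are assumptions, and the real code presumably also writes user credentials):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureClusterEntry adds a cluster and context named `name` to the kubeconfig
// at path if they are missing, roughly what "needs updating (will repair)" implies.
func ensureClusterEntry(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	changed := false
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server}
		changed = true
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		changed = true
	}
	if !changed {
		return nil
	}
	fmt.Printf("repairing kubeconfig %s for %s\n", path, name)
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := ensureClusterEntry(clientcmd.RecommendedHomeFile, "old-k8s-version-433330", "https://192.168.76.2:8443"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}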
	I1219 03:05:07.835975  338195 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:05:07.846413  338195 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1219 03:05:07.846449  338195 kubeadm.go:602] duration metric: took 25.282312ms to restartPrimaryControlPlane
	I1219 03:05:07.846460  338195 kubeadm.go:403] duration metric: took 88.786269ms to StartCluster
	I1219 03:05:07.846477  338195 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:07.846534  338195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:07.848255  338195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:07.848542  338195 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:05:07.848615  338195 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:05:07.848736  338195 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-433330"
	I1219 03:05:07.848763  338195 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-433330"
	W1219 03:05:07.848771  338195 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:05:07.848800  338195 host.go:66] Checking if "old-k8s-version-433330" exists ...
	I1219 03:05:07.848809  338195 config.go:182] Loaded profile config "old-k8s-version-433330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1219 03:05:07.848842  338195 addons.go:70] Setting dashboard=true in profile "old-k8s-version-433330"
	I1219 03:05:07.848859  338195 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-433330"
	I1219 03:05:07.848867  338195 addons.go:239] Setting addon dashboard=true in "old-k8s-version-433330"
	I1219 03:05:07.848874  338195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-433330"
	W1219 03:05:07.848876  338195 addons.go:248] addon dashboard should already be in state true
	I1219 03:05:07.848915  338195 host.go:66] Checking if "old-k8s-version-433330" exists ...
	I1219 03:05:07.849165  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:07.849304  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:07.849396  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:07.855528  338195 out.go:179] * Verifying Kubernetes components...
	I1219 03:05:07.856889  338195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:07.879295  338195 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:07.879381  338195 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:07.879435  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:07.879640  338195 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-433330"
	W1219 03:05:07.879684  338195 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:07.879737  338195 host.go:66] Checking if "old-k8s-version-433330" exists ...
	I1219 03:05:07.880382  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:07.881862  338195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:05:07.189594  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:07.690351  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:08.189479  332512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:05:08.324522  332512 kubeadm.go:1114] duration metric: took 4.734448628s to wait for elevateKubeSystemPrivileges
	I1219 03:05:08.324553  332512 kubeadm.go:403] duration metric: took 16.78586446s to StartCluster
	I1219 03:05:08.324572  332512 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.324658  332512 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:08.326911  332512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.327319  332512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:05:08.327490  332512 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:05:08.328010  332512 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:08.327791  332512 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:05:08.328121  332512 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:05:08.328138  332512 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	I1219 03:05:08.328162  332512 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:05:08.328642  332512 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:08.328946  332512 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:05:08.328966  332512 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:05:08.329284  332512 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:08.329845  332512 out.go:179] * Verifying Kubernetes components...
	I1219 03:05:08.330984  332512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:08.361776  332512 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	I1219 03:05:08.361817  332512 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:05:08.362269  332512 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:08.365940  332512 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:05:07.070863  338816 cli_runner.go:164] Run: docker network inspect no-preload-278042 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:07.090826  338816 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:07.095208  338816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:07.107191  338816 kubeadm.go:884] updating cluster {Name:no-preload-278042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:07.107316  338816 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:05:07.107349  338816 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:07.143426  338816 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:07.143452  338816 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:07.143461  338816 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:05:07.143566  338816 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-278042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:07.143673  338816 ssh_runner.go:195] Run: crio config
	I1219 03:05:07.204551  338816 cni.go:84] Creating CNI manager for ""
	I1219 03:05:07.204574  338816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:07.204593  338816 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:07.204618  338816 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-278042 NodeName:no-preload-278042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:07.204775  338816 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-278042"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:07.204856  338816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:05:07.213896  338816 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:07.213983  338816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:07.222161  338816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1219 03:05:07.237467  338816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:05:07.253225  338816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
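kubeadm.yaml.new (2221 bytes) is rendered from the kubeadm options dumped at 03:05:07.204618 and copied to /var/tmp/minikube; later (03:05:08.588255) a `diff -u` against the existing kubeadm.yaml decides whether the control plane actually needs reconfiguring. A toy text/template sketch of that rendering step, with made-up field names and only a fragment of the real template:

package main

import (
	"os"
	"text/template"
)

// toy parameters; minikube derives these from the cluster config shown above.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.103.2",
		BindPort:          8443,
		NodeName:          "no-preload-278042",
		KubernetesVersion: "v1.35.0-rc.1",
		PodSubnet:         "10.244.0.0/16",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}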
	I1219 03:05:07.267954  338816 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:07.271869  338816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:07.282665  338816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:07.368635  338816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:07.395859  338816 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042 for IP: 192.168.103.2
	I1219 03:05:07.395885  338816 certs.go:195] generating shared ca certs ...
	I1219 03:05:07.395910  338816 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:07.396055  338816 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:05:07.396125  338816 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:05:07.396145  338816 certs.go:257] generating profile certs ...
	I1219 03:05:07.396242  338816 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.key
	I1219 03:05:07.396319  338816 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/apiserver.key.225a496e
	I1219 03:05:07.396365  338816 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/proxy-client.key
	I1219 03:05:07.396499  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:05:07.396531  338816 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:05:07.396541  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:05:07.396565  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:05:07.396590  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:05:07.396612  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:05:07.396653  338816 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:07.397248  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:05:07.424809  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:05:07.450904  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:05:07.484112  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:05:07.532227  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:05:07.568778  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:05:07.597233  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:05:07.633283  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:05:07.663320  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:05:07.687923  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:05:07.713957  338816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:05:07.735390  338816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:05:07.754651  338816 ssh_runner.go:195] Run: openssl version
	I1219 03:05:07.761976  338816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.770425  338816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:05:07.780764  338816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.785927  338816 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.785982  338816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:05:07.835166  338816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:05:07.846470  338816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.860237  338816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:05:07.871696  338816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.877191  338816 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.877254  338816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:05:07.946235  338816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:05:07.957618  338816 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.968122  338816 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:05:07.979583  338816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.986265  338816 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:05:07.986401  338816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:05:08.050665  338816 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:05:08.061489  338816 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:05:08.067348  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:05:08.134141  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:05:08.201735  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:05:08.275726  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:05:08.344352  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:05:08.409074  338816 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:05:08.474374  338816 kubeadm.go:401] StartCluster: {Name:no-preload-278042 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-278042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:08.474489  338816 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:05:08.474554  338816 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:05:08.528613  338816 cri.go:92] found id: "5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a"
	I1219 03:05:08.528639  338816 cri.go:92] found id: "001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae"
	I1219 03:05:08.528646  338816 cri.go:92] found id: "973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2"
	I1219 03:05:08.528651  338816 cri.go:92] found id: "821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec"
	I1219 03:05:08.528656  338816 cri.go:92] found id: ""
	I1219 03:05:08.528698  338816 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:05:08.554108  338816 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:05:08Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:05:08.554182  338816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:05:08.568145  338816 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:05:08.568218  338816 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:05:08.568321  338816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:05:08.581222  338816 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:05:08.582541  338816 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-278042" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:08.583742  338816 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-278042" cluster setting kubeconfig missing "no-preload-278042" context setting]
	I1219 03:05:08.585602  338816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.588255  338816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:05:08.604210  338816 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1219 03:05:08.604347  338816 kubeadm.go:602] duration metric: took 36.110072ms to restartPrimaryControlPlane
	I1219 03:05:08.604362  338816 kubeadm.go:403] duration metric: took 129.998216ms to StartCluster
	I1219 03:05:08.604495  338816 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.604622  338816 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:08.607318  338816 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:05:08.607866  338816 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:05:08.607997  338816 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:05:08.608095  338816 addons.go:70] Setting storage-provisioner=true in profile "no-preload-278042"
	I1219 03:05:08.608111  338816 addons.go:239] Setting addon storage-provisioner=true in "no-preload-278042"
	W1219 03:05:08.608119  338816 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:05:08.608148  338816 host.go:66] Checking if "no-preload-278042" exists ...
	I1219 03:05:08.608655  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:08.608956  338816 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:05:08.609157  338816 addons.go:70] Setting dashboard=true in profile "no-preload-278042"
	I1219 03:05:08.609187  338816 addons.go:239] Setting addon dashboard=true in "no-preload-278042"
	W1219 03:05:08.609224  338816 addons.go:248] addon dashboard should already be in state true
	I1219 03:05:08.609250  338816 host.go:66] Checking if "no-preload-278042" exists ...
	I1219 03:05:08.609963  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:08.610134  338816 addons.go:70] Setting default-storageclass=true in profile "no-preload-278042"
	I1219 03:05:08.610171  338816 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-278042"
	I1219 03:05:08.610441  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:08.612260  338816 out.go:179] * Verifying Kubernetes components...
	I1219 03:05:08.613459  338816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:08.650159  338816 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:08.650253  338816 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:08.650380  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:08.650932  338816 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:05:08.367066  332512 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.367116  332512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:08.367199  332512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:08.400056  332512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:08.405801  332512 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:08.405827  332512 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:08.405883  332512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:08.442051  332512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:08.495984  332512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:05:08.576504  332512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:08.638126  332512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.714925  332512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:08.903440  332512 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
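The "host record injected into CoreDNS's ConfigMap" line is the outcome of the kubectl/sed pipeline at 03:05:08.495984, which splices a `hosts { ... }` stanza in front of the `forward . /etc/resolv.conf` plugin in the Corefile. A client-go sketch of the same edit (hypothetical helper; the real flow shells out to kubectl as shown above):

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord prepends a hosts{} stanza for host.minikube.internal to the
// coredns Corefile, roughly what the sed pipeline in the log achieves.
func injectHostRecord(cs kubernetes.Interface, gatewayIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf", stanza+"        forward . /etc/resolv.conf", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := injectHostRecord(kubernetes.NewForConfigOrDie(cfg), "192.168.94.1"); err != nil {
		panic(err)
	}
}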
	I1219 03:05:08.905932  332512 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:05:09.140434  332512 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:05:07.883077  338195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:07.883135  338195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:07.883213  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:07.910300  338195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:07.910324  338195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:07.910382  338195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:05:07.912917  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:07.913769  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:07.945627  338195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:05:08.042651  338195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:08.066541  338195 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-433330" to be "Ready" ...
	I1219 03:05:08.073928  338195 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:08.074110  338195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.078080  338195 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:08.097406  338195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:08.652002  338816 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.652056  338816 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:08.652162  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:08.660192  338816 addons.go:239] Setting addon default-storageclass=true in "no-preload-278042"
	W1219 03:05:08.660219  338816 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:08.660249  338816 host.go:66] Checking if "no-preload-278042" exists ...
	I1219 03:05:08.660714  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:08.693862  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:08.694594  338816 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:08.694612  338816 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:08.694662  338816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:05:08.697825  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:08.733299  338816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:05:08.842079  338816 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:08.844314  338816 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:08.849998  338816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:08.867204  338816 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:08.867451  338816 node_ready.go:35] waiting up to 6m0s for node "no-preload-278042" to be "Ready" ...
	I1219 03:05:08.880152  338816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:10.315153  338816 node_ready.go:49] node "no-preload-278042" is "Ready"
	I1219 03:05:10.315193  338816 node_ready.go:38] duration metric: took 1.447691115s for node "no-preload-278042" to be "Ready" ...
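node_ready.go polls the node object until its Ready condition turns True, giving up after the 6m0s budget; here no-preload-278042 becomes Ready after about 1.4s, while other profiles in this interleaved log are still retrying. A minimal client-go polling sketch (illustrative, not the test helper itself):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the
// timeout expires, mirroring "waiting up to 6m0s for node ... to be Ready".
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "no-preload-278042", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}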
	I1219 03:05:10.315209  338816 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:10.315268  338816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:09.141786  332512 addons.go:546] duration metric: took 813.994932ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:05:09.410032  332512 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-717222" context rescaled to 1 replicas
	W1219 03:05:10.909328  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	W1219 03:05:09.148113  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	W1219 03:05:11.148219  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	I1219 03:05:10.594634  338195 node_ready.go:49] node "old-k8s-version-433330" is "Ready"
	I1219 03:05:10.594676  338195 node_ready.go:38] duration metric: took 2.528095005s for node "old-k8s-version-433330" to be "Ready" ...
	I1219 03:05:10.594694  338195 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:10.594785  338195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:11.438208  338195 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (3.3600947s)
	I1219 03:05:11.438243  338195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.364104191s)
	I1219 03:05:11.438299  338195 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:11.438319  338195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.340863748s)
	I1219 03:05:11.438418  338195 api_server.go:72] duration metric: took 3.589841435s to wait for apiserver process to appear ...
	I1219 03:05:11.438433  338195 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:11.438450  338195 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:05:11.445459  338195 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1219 03:05:11.445494  338195 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1219 03:05:11.939501  338195 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:05:11.944271  338195 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:05:11.946314  338195 api_server.go:141] control plane version: v1.28.0
	I1219 03:05:11.946344  338195 api_server.go:131] duration metric: took 507.904118ms to wait for apiserver health ...
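The 500 above is expected this early in the restart: the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished, so /healthz reports "healthz check failed" until they do, and the next poll roughly half a second later gets a plain 200/ok. A bare-bones version of that polling loop (sketch only, assuming an anonymous client and skipped TLS verification; the real check is built from the cluster's credentials):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200,
// as in the "waiting for apiserver healthz status" phase of the log.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}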
	I1219 03:05:11.946355  338195 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:11.950741  338195 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:11.950809  338195 system_pods.go:61] "coredns-5dd5756b68-vp79f" [9fcc07be-0cde-4964-af90-fb09218728e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:11.950821  338195 system_pods.go:61] "etcd-old-k8s-version-433330" [e7e65e56-a92a-43ec-8dda-93b521937bef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:11.950840  338195 system_pods.go:61] "kindnet-hm2sz" [c6df6f60-75af-46bf-9a07-9644745d5f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:11.950853  338195 system_pods.go:61] "kube-apiserver-old-k8s-version-433330" [50ae6467-8e2c-41f5-9c9c-eda6741c41f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:11.950861  338195 system_pods.go:61] "kube-controller-manager-old-k8s-version-433330" [f680d80e-8a0e-486d-8e26-91e124efe760] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:11.950873  338195 system_pods.go:61] "kube-proxy-wdrk8" [b2738e98-0383-41b2-b183-a13a2a915c6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:11.950881  338195 system_pods.go:61] "kube-scheduler-old-k8s-version-433330" [465a3df8-5c4b-44d0-aaa1-b4b1e35e0d67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:11.950890  338195 system_pods.go:61] "storage-provisioner" [0fba7aca-106d-40c8-8651-91680e4fedcc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:11.950898  338195 system_pods.go:74] duration metric: took 4.535468ms to wait for pod list to return data ...
	I1219 03:05:11.950910  338195 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:11.956062  338195 default_sa.go:45] found service account: "default"
	I1219 03:05:11.956091  338195 default_sa.go:55] duration metric: took 5.174812ms for default service account to be created ...
	I1219 03:05:11.956104  338195 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:11.959815  338195 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:11.959849  338195 system_pods.go:89] "coredns-5dd5756b68-vp79f" [9fcc07be-0cde-4964-af90-fb09218728e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:11.959863  338195 system_pods.go:89] "etcd-old-k8s-version-433330" [e7e65e56-a92a-43ec-8dda-93b521937bef] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:11.959876  338195 system_pods.go:89] "kindnet-hm2sz" [c6df6f60-75af-46bf-9a07-9644745d5f72] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:11.959886  338195 system_pods.go:89] "kube-apiserver-old-k8s-version-433330" [50ae6467-8e2c-41f5-9c9c-eda6741c41f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:11.959896  338195 system_pods.go:89] "kube-controller-manager-old-k8s-version-433330" [f680d80e-8a0e-486d-8e26-91e124efe760] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:11.959909  338195 system_pods.go:89] "kube-proxy-wdrk8" [b2738e98-0383-41b2-b183-a13a2a915c6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:11.959919  338195 system_pods.go:89] "kube-scheduler-old-k8s-version-433330" [465a3df8-5c4b-44d0-aaa1-b4b1e35e0d67] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:11.959928  338195 system_pods.go:89] "storage-provisioner" [0fba7aca-106d-40c8-8651-91680e4fedcc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:11.959939  338195 system_pods.go:126] duration metric: took 3.828183ms to wait for k8s-apps to be running ...
	I1219 03:05:11.959951  338195 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:11.960013  338195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:12.440481  338195 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.002148257s)
	I1219 03:05:12.440525  338195 system_svc.go:56] duration metric: took 480.567587ms WaitForService to wait for kubelet
	I1219 03:05:12.440544  338195 kubeadm.go:587] duration metric: took 4.591969486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:12.440564  338195 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:12.440569  338195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:12.443630  338195 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:12.443661  338195 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:12.443680  338195 node_conditions.go:105] duration metric: took 3.110203ms to run NodePressure ...
	I1219 03:05:12.443694  338195 start.go:242] waiting for startup goroutines ...
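The 500 responses above are what /healthz returns while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still finishing; api_server.go simply keeps re-polling until the endpoint answers 200. A minimal standalone sketch of that polling loop in Go (the address and the skipped certificate verification are taken from the log; the rest is an illustrative assumption, not minikube's actual implementation) would be:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves /healthz with a self-signed CA, so this sketch skips TLS verification.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		url := "https://192.168.76.2:8443/healthz" // endpoint seen in the log above
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // post-start hooks finished, apiserver is healthy
				}
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond) // matches the roughly 500ms retry cadence in the log
		}
	}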
	I1219 03:05:11.019356  338816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.169299179s)
	I1219 03:05:11.019382  338816 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.152150538s)
	I1219 03:05:11.019432  338816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.139259984s)
	I1219 03:05:11.019463  338816 api_server.go:72] duration metric: took 2.410472354s to wait for apiserver process to appear ...
	I1219 03:05:11.019474  338816 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:11.019499  338816 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:05:11.019872  338816 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:11.025534  338816 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:11.025568  338816 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:11.520552  338816 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1219 03:05:11.525350  338816 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1219 03:05:11.526441  338816 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:05:11.526466  338816 api_server.go:131] duration metric: took 506.986603ms to wait for apiserver health ...
	I1219 03:05:11.526475  338816 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:11.530506  338816 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:11.530551  338816 system_pods.go:61] "coredns-7d764666f9-vj7lm" [6bb897eb-e856-4660-aa9c-3fac6b610d38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:11.530565  338816 system_pods.go:61] "etcd-no-preload-278042" [a9dcae0a-af63-4eb2-a240-c68ab749763e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:11.530583  338816 system_pods.go:61] "kindnet-xrp2s" [b0f7317a-c504-4597-ba97-3d50ee2927c1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:11.530593  338816 system_pods.go:61] "kube-apiserver-no-preload-278042" [ac835fd3-def8-49e8-bee3-b76ee0667ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:11.530608  338816 system_pods.go:61] "kube-controller-manager-no-preload-278042" [0938d60f-d3e9-457e-ac68-8cba5d210c11] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:11.530618  338816 system_pods.go:61] "kube-proxy-g2gm4" [4cb3af28-e9b4-45b6-80d4-fe8bdadd6911] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:11.530632  338816 system_pods.go:61] "kube-scheduler-no-preload-278042" [bb8f444d-8eae-4359-917f-04165ccecf47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:11.530643  338816 system_pods.go:61] "storage-provisioner" [7114449c-463d-44ef-955c-5dda46333a32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:11.530651  338816 system_pods.go:74] duration metric: took 4.169725ms to wait for pod list to return data ...
	I1219 03:05:11.530660  338816 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:11.533311  338816 default_sa.go:45] found service account: "default"
	I1219 03:05:11.533333  338816 default_sa.go:55] duration metric: took 2.662455ms for default service account to be created ...
	I1219 03:05:11.533342  338816 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:11.536223  338816 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:11.536255  338816 system_pods.go:89] "coredns-7d764666f9-vj7lm" [6bb897eb-e856-4660-aa9c-3fac6b610d38] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:11.536267  338816 system_pods.go:89] "etcd-no-preload-278042" [a9dcae0a-af63-4eb2-a240-c68ab749763e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:11.536276  338816 system_pods.go:89] "kindnet-xrp2s" [b0f7317a-c504-4597-ba97-3d50ee2927c1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:11.536291  338816 system_pods.go:89] "kube-apiserver-no-preload-278042" [ac835fd3-def8-49e8-bee3-b76ee0667ca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:11.536298  338816 system_pods.go:89] "kube-controller-manager-no-preload-278042" [0938d60f-d3e9-457e-ac68-8cba5d210c11] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:11.536306  338816 system_pods.go:89] "kube-proxy-g2gm4" [4cb3af28-e9b4-45b6-80d4-fe8bdadd6911] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:11.536313  338816 system_pods.go:89] "kube-scheduler-no-preload-278042" [bb8f444d-8eae-4359-917f-04165ccecf47] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:11.536320  338816 system_pods.go:89] "storage-provisioner" [7114449c-463d-44ef-955c-5dda46333a32] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:11.536329  338816 system_pods.go:126] duration metric: took 2.980203ms to wait for k8s-apps to be running ...
	I1219 03:05:11.536337  338816 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:11.536385  338816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:12.894015  338816 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.874104985s)
	I1219 03:05:12.894083  338816 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.357672898s)
	I1219 03:05:12.894105  338816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:12.894115  338816 system_svc.go:56] duration metric: took 1.357775052s WaitForService to wait for kubelet
	I1219 03:05:12.894126  338816 kubeadm.go:587] duration metric: took 4.285135318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:12.894151  338816 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:12.897676  338816 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:12.897720  338816 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:12.897737  338816 node_conditions.go:105] duration metric: took 3.579647ms to run NodePressure ...
	I1219 03:05:12.897752  338816 start.go:242] waiting for startup goroutines ...
	I1219 03:05:15.854695  338816 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.960550336s)
	I1219 03:05:15.854808  338816 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:16.045446  338816 addons.go:500] Verifying addon dashboard=true in "no-preload-278042"
	I1219 03:05:16.045845  338816 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:05:16.070451  338816 out.go:179] * Verifying dashboard addon...
	I1219 03:05:15.413795  338195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.973186291s)
	I1219 03:05:15.413880  338195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:16.135613  338195 addons.go:500] Verifying addon dashboard=true in "old-k8s-version-433330"
	I1219 03:05:16.135982  338195 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:05:16.156407  338195 out.go:179] * Verifying dashboard addon...
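The dashboard addon install above runs helm upgrade --install against the kubernetes.github.io/dashboard chart and then applies dashboard-admin.yaml; the "Verifying dashboard addon" step that follows polls pods carrying the app.kubernetes.io/name=kubernetes-dashboard-web label (the kapi.go lines later in the log). A rough client-go sketch of that verification loop, assuming the kubeconfig path used elsewhere in this log and simplifying the readiness check to the pod phase, might look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is the one the test passes to kubectl/helm on the node.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Println("dashboard-web pod is running")
				return
			}
			time.Sleep(500 * time.Millisecond) // roughly the polling cadence kapi.go shows below
		}
	}

	// allRunning reports whether every matched pod has reached the Running phase.
	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

The real kapi.go helper waits on the pod state it logs ("current state: Pending"), which this sketch approximates with the Running phase.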
	W1219 03:05:12.910131  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	W1219 03:05:15.410090  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	W1219 03:05:13.647770  330835 node_ready.go:57] node "embed-certs-805185" has "Ready":"False" status (will retry)
	I1219 03:05:15.647507  330835 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:15.647547  330835 node_ready.go:38] duration metric: took 12.50348426s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:15.647565  330835 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:15.647622  330835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:15.665212  330835 api_server.go:72] duration metric: took 12.794611643s to wait for apiserver process to appear ...
	I1219 03:05:15.665242  330835 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:15.665272  330835 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:15.670942  330835 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:15.672243  330835 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:15.672276  330835 api_server.go:131] duration metric: took 7.026021ms to wait for apiserver health ...
	I1219 03:05:15.672288  330835 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:15.676548  330835 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:15.676588  330835 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:15.676597  330835 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running
	I1219 03:05:15.676606  330835 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running
	I1219 03:05:15.676612  330835 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running
	I1219 03:05:15.676621  330835 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running
	I1219 03:05:15.676625  330835 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running
	I1219 03:05:15.676635  330835 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running
	I1219 03:05:15.676643  330835 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:15.676655  330835 system_pods.go:74] duration metric: took 4.359785ms to wait for pod list to return data ...
	I1219 03:05:15.676667  330835 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:15.679531  330835 default_sa.go:45] found service account: "default"
	I1219 03:05:15.679562  330835 default_sa.go:55] duration metric: took 2.88404ms for default service account to be created ...
	I1219 03:05:15.679574  330835 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:15.686023  330835 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:15.686069  330835 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:15.686080  330835 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running
	I1219 03:05:15.686092  330835 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running
	I1219 03:05:15.686098  330835 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running
	I1219 03:05:15.686105  330835 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running
	I1219 03:05:15.686110  330835 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running
	I1219 03:05:15.686115  330835 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running
	I1219 03:05:15.686123  330835 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:15.686155  330835 retry.go:31] will retry after 250.846843ms: missing components: kube-dns
	I1219 03:05:15.942684  330835 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:15.942752  330835 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:15.942761  330835 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running
	I1219 03:05:15.942770  330835 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running
	I1219 03:05:15.942776  330835 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running
	I1219 03:05:15.942783  330835 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running
	I1219 03:05:15.942788  330835 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running
	I1219 03:05:15.942793  330835 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running
	I1219 03:05:15.942802  330835 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:15.942822  330835 retry.go:31] will retry after 299.918101ms: missing components: kube-dns
	I1219 03:05:16.246247  330835 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:16.246283  330835 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running
	I1219 03:05:16.246293  330835 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running
	I1219 03:05:16.246299  330835 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running
	I1219 03:05:16.246305  330835 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running
	I1219 03:05:16.246312  330835 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running
	I1219 03:05:16.246317  330835 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running
	I1219 03:05:16.246322  330835 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running
	I1219 03:05:16.246328  330835 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running
	I1219 03:05:16.246339  330835 system_pods.go:126] duration metric: took 566.755252ms to wait for k8s-apps to be running ...
	I1219 03:05:16.246352  330835 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:16.246396  330835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:16.261160  330835 system_svc.go:56] duration metric: took 14.796481ms WaitForService to wait for kubelet
	I1219 03:05:16.261200  330835 kubeadm.go:587] duration metric: took 13.390608365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:16.261220  330835 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:16.263958  330835 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:16.263983  330835 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:16.263996  330835 node_conditions.go:105] duration metric: took 2.770433ms to run NodePressure ...
	I1219 03:05:16.264008  330835 start.go:242] waiting for startup goroutines ...
	I1219 03:05:16.264017  330835 start.go:247] waiting for cluster config update ...
	I1219 03:05:16.264029  330835 start.go:256] writing updated cluster config ...
	I1219 03:05:16.264312  330835 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:16.268532  330835 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:16.347111  330835 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.352357  330835 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:05:16.352382  330835 pod_ready.go:86] duration metric: took 5.238735ms for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.354520  330835 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.358398  330835 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:05:16.358420  330835 pod_ready.go:86] duration metric: took 3.879167ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.360455  330835 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.364149  330835 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:05:16.364168  330835 pod_ready.go:86] duration metric: took 3.693544ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.365862  330835 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.672663  330835 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:05:16.672697  330835 pod_ready.go:86] duration metric: took 306.817483ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:16.874110  330835 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:17.272998  330835 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:05:17.273027  330835 pod_ready.go:86] duration metric: took 398.889124ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:17.473660  330835 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:17.873324  330835 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:05:17.873358  330835 pod_ready.go:86] duration metric: took 399.668419ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:17.873373  330835 pod_ready.go:40] duration metric: took 1.604806437s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:17.928904  330835 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:05:17.931254  330835 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	I1219 03:05:16.159862  338195 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:16.162760  338195 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:16.072616  338816 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:16.075989  338816 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:16.076006  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:16.577520  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.075642  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:17.576941  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.077640  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:18.576828  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.078319  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:19.576534  338816 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:20.076165  338816 kapi.go:107] duration metric: took 4.003544668s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:05:20.077978  338816 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-278042 addons enable metrics-server
	
	I1219 03:05:20.080004  338816 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:05:20.085253  338816 addons.go:546] duration metric: took 11.477254429s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:05:20.085362  338816 start.go:247] waiting for cluster config update ...
	I1219 03:05:20.085378  338816 start.go:256] writing updated cluster config ...
	I1219 03:05:20.085793  338816 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:20.093277  338816 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:20.097833  338816 pod_ready.go:83] waiting for pod "coredns-7d764666f9-vj7lm" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:05:17.909999  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	W1219 03:05:19.910131  332512 node_ready.go:57] node "default-k8s-diff-port-717222" has "Ready":"False" status (will retry)
	I1219 03:05:21.910035  332512 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:21.910077  332512 node_ready.go:38] duration metric: took 13.004087015s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:05:21.910093  332512 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:21.910153  332512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:21.926211  332512 api_server.go:72] duration metric: took 13.598689266s to wait for apiserver process to appear ...
	I1219 03:05:21.926238  332512 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:21.926261  332512 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:05:21.931204  332512 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:05:21.932318  332512 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:21.932346  332512 api_server.go:131] duration metric: took 6.100419ms to wait for apiserver health ...
	I1219 03:05:21.932357  332512 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:21.936179  332512 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:21.936208  332512 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:21.936214  332512 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:21.936219  332512 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:21.936222  332512 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:21.936226  332512 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:21.936230  332512 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:21.936234  332512 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:21.936242  332512 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:21.936251  332512 system_pods.go:74] duration metric: took 3.886862ms to wait for pod list to return data ...
	I1219 03:05:21.936263  332512 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:21.938733  332512 default_sa.go:45] found service account: "default"
	I1219 03:05:21.938754  332512 default_sa.go:55] duration metric: took 2.48343ms for default service account to be created ...
	I1219 03:05:21.938762  332512 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:21.941900  332512 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:21.941943  332512 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:21.941953  332512 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:21.941963  332512 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:21.941991  332512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:21.941999  332512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:21.942010  332512 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:21.942017  332512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:21.942038  332512 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:21.942077  332512 retry.go:31] will retry after 190.375881ms: missing components: kube-dns
	I1219 03:05:22.137040  332512 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:22.137075  332512 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:22.137082  332512 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:22.137091  332512 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:22.137102  332512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:22.137113  332512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:22.137119  332512 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:22.137125  332512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:22.137133  332512 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:22.137152  332512 retry.go:31] will retry after 271.345441ms: missing components: kube-dns
	I1219 03:05:22.413383  332512 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:22.413426  332512 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:22.413449  332512 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:22.413466  332512 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:22.413473  332512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:22.413480  332512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:22.413488  332512 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:22.413495  332512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:22.413507  332512 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:22.413530  332512 retry.go:31] will retry after 362.736045ms: missing components: kube-dns
	I1219 03:05:22.781610  332512 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:22.781658  332512 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running
	I1219 03:05:22.781667  332512 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running
	I1219 03:05:22.781674  332512 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:05:22.781680  332512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running
	I1219 03:05:22.781687  332512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running
	I1219 03:05:22.781693  332512 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:05:22.781698  332512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running
	I1219 03:05:22.781732  332512 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:05:22.781743  332512 system_pods.go:126] duration metric: took 842.974471ms to wait for k8s-apps to be running ...
	I1219 03:05:22.781758  332512 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:22.781811  332512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:22.799460  332512 system_svc.go:56] duration metric: took 17.692998ms WaitForService to wait for kubelet
	I1219 03:05:22.799488  332512 kubeadm.go:587] duration metric: took 14.471971429s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:22.799513  332512 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:22.802953  332512 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:22.802983  332512 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:22.803004  332512 node_conditions.go:105] duration metric: took 3.48447ms to run NodePressure ...
	I1219 03:05:22.803018  332512 start.go:242] waiting for startup goroutines ...
	I1219 03:05:22.803031  332512 start.go:247] waiting for cluster config update ...
	I1219 03:05:22.803045  332512 start.go:256] writing updated cluster config ...
	I1219 03:05:22.803366  332512 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:22.808138  332512 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:22.881856  332512 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.887118  332512 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:05:22.887149  332512 pod_ready.go:86] duration metric: took 5.261763ms for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.889574  332512 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.894034  332512 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:22.894059  332512 pod_ready.go:86] duration metric: took 4.396584ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.896328  332512 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.900542  332512 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:22.900567  332512 pod_ready.go:86] duration metric: took 4.218046ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:22.902641  332512 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:23.214058  332512 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:23.214114  332512 pod_ready.go:86] duration metric: took 311.451444ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:23.560693  332512 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:23.813228  332512 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:05:23.813263  332512 pod_ready.go:86] duration metric: took 252.512477ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:24.013759  332512 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:24.413305  332512 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:05:24.413337  332512 pod_ready.go:86] duration metric: took 399.543508ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:05:24.413351  332512 pod_ready.go:40] duration metric: took 1.605180295s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:24.471536  332512 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:05:24.475655  332512 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
	I1219 03:05:24.164758  338195 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:24.164785  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:24.668000  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:05:22.106320  338816 pod_ready.go:104] pod "coredns-7d764666f9-vj7lm" is not "Ready", error: <nil>
	W1219 03:05:24.607413  338816 pod_ready.go:104] pod "coredns-7d764666f9-vj7lm" is not "Ready", error: <nil>
	I1219 03:05:25.165804  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:25.665541  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.165392  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:26.664670  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.164330  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:27.664906  338195 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:28.163947  338195 kapi.go:107] duration metric: took 12.004086792s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:05:28.165238  338195 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-433330 addons enable metrics-server
	
	I1219 03:05:28.167168  338195 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:05:28.168347  338195 addons.go:546] duration metric: took 20.319738224s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:05:28.168473  338195 start.go:247] waiting for cluster config update ...
	I1219 03:05:28.168502  338195 start.go:256] writing updated cluster config ...
	I1219 03:05:28.168817  338195 ssh_runner.go:195] Run: rm -f paused
	I1219 03:05:28.173198  338195 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:05:28.178512  338195 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vp79f" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:05:27.104728  338816 pod_ready.go:104] pod "coredns-7d764666f9-vj7lm" is not "Ready", error: <nil>
	W1219 03:05:29.109693  338816 pod_ready.go:104] pod "coredns-7d764666f9-vj7lm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Dec 19 03:05:22 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:22.063631912Z" level=info msg="Starting container: 1d70fbf699cd642ce0850af421c4233ad1e842981c4cd42f00a6a41a538412fc" id=7e0249ed-f1ba-43b1-825e-7492fe6a8ce3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:22 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:22.06605436Z" level=info msg="Started container" PID=1916 containerID=1d70fbf699cd642ce0850af421c4233ad1e842981c4cd42f00a6a41a538412fc description=kube-system/coredns-66bc5c9577-dskxl/coredns id=7e0249ed-f1ba-43b1-825e-7492fe6a8ce3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75e3ef451c07d05ef52fe496dd474c834b1213b3e56c18efd8f86dc98f246f83
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.074385872Z" level=info msg="Running pod sandbox: default/busybox/POD" id=06ef7c4e-7a54-4e86-9778-67e6127cfb8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.074473494Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.083435709Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bebac018a63f5218f21f0aae49f35dd846cc510b090ec30b1eac6558cbedcea2 UID:a9f35053-e166-41af-99cf-2a293efdd88e NetNS:/var/run/netns/70c53176-7ac9-4e25-bff0-2802763b7dd4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000723670}] Aliases:map[]}"
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.083520569Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.101688904Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bebac018a63f5218f21f0aae49f35dd846cc510b090ec30b1eac6558cbedcea2 UID:a9f35053-e166-41af-99cf-2a293efdd88e NetNS:/var/run/netns/70c53176-7ac9-4e25-bff0-2802763b7dd4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000723670}] Aliases:map[]}"
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.102169073Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.103451816Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.106209847Z" level=info msg="Ran pod sandbox bebac018a63f5218f21f0aae49f35dd846cc510b090ec30b1eac6558cbedcea2 with infra container: default/busybox/POD" id=06ef7c4e-7a54-4e86-9778-67e6127cfb8d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.108014027Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00e5fa6c-945f-4e7a-a6e8-40847aa2f606 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.108163761Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=00e5fa6c-945f-4e7a-a6e8-40847aa2f606 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.108220187Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=00e5fa6c-945f-4e7a-a6e8-40847aa2f606 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.109144352Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e7361e86-b713-41c7-9e6c-c9fd771fc1a8 name=/runtime.v1.ImageService/PullImage
	Dec 19 03:05:25 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:25.111206346Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.475151649Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=e7361e86-b713-41c7-9e6c-c9fd771fc1a8 name=/runtime.v1.ImageService/PullImage
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.476104126Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=01a7c207-7695-4c95-8d8c-90cf385318d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.477776421Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=67152872-b0c5-42e9-943d-9cafdf69c0a5 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.4824211Z" level=info msg="Creating container: default/busybox/busybox" id=0fa0aed4-3ef0-4451-98fb-da26b4bb7cb3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.482573043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.487400577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.488013918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.533154648Z" level=info msg="Created container 4afd721162283da20752358c4bd4b2a2f865afdd4cb23e4ee9d1cec8ac999196: default/busybox/busybox" id=0fa0aed4-3ef0-4451-98fb-da26b4bb7cb3 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.534062361Z" level=info msg="Starting container: 4afd721162283da20752358c4bd4b2a2f865afdd4cb23e4ee9d1cec8ac999196" id=b04c7363-6d77-4478-81d4-1d2edfbc9250 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:26 default-k8s-diff-port-717222 crio[781]: time="2025-12-19T03:05:26.536666302Z" level=info msg="Started container" PID=1989 containerID=4afd721162283da20752358c4bd4b2a2f865afdd4cb23e4ee9d1cec8ac999196 description=default/busybox/busybox id=b04c7363-6d77-4478-81d4-1d2edfbc9250 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bebac018a63f5218f21f0aae49f35dd846cc510b090ec30b1eac6558cbedcea2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4afd721162283       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   bebac018a63f5       busybox                                                default
	1d70fbf699cd6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   75e3ef451c07d       coredns-66bc5c9577-dskxl                               kube-system
	609d11ef40010       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   8bbfa6e984396       storage-provisioner                                    kube-system
	47b2ee9311fc2       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   2ec9f369b7932       kindnet-zgcrn                                          kube-system
	3853c74aeea32       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      26 seconds ago      Running             kube-proxy                0                   93d6d696e3f39       kube-proxy-mr7c8                                       kube-system
	1aac5be4a209e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      37 seconds ago      Running             kube-apiserver            0                   86dbf9c09d392       kube-apiserver-default-k8s-diff-port-717222            kube-system
	8e44233b0bed1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      37 seconds ago      Running             etcd                      0                   83fac8c901a44       etcd-default-k8s-diff-port-717222                      kube-system
	8653b82dbe260       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      37 seconds ago      Running             kube-scheduler            0                   cd5367d2f873e       kube-scheduler-default-k8s-diff-port-717222            kube-system
	bdc384498eefd       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      37 seconds ago      Running             kube-controller-manager   0                   db6d81cf5c92a       kube-controller-manager-default-k8s-diff-port-717222   kube-system
	
	
	==> coredns [1d70fbf699cd642ce0850af421c4233ad1e842981c4cd42f00a6a41a538412fc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34606 - 59797 "HINFO IN 769038341539765791.1381906198785584753. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.017076848s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-717222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-717222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-717222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_05_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:05:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-717222
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:05:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:05:33 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:05:33 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:05:33 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:05:33 +0000   Fri, 19 Dec 2025 03:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-717222
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                301b16dc-31c1-4466-a363-b4e4f9941cd5
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-dskxl                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-default-k8s-diff-port-717222                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-zgcrn                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-default-k8s-diff-port-717222             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-717222    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-mr7c8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-default-k8s-diff-port-717222             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-717222 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [8e44233b0bed1a97b5662ff481c4fe3626a469b9169f270611ded66d69a739b1] <==
	{"level":"warn","ts":"2025-12-19T03:04:59.394051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.403393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.410982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.419181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.426636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.437029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.448853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.456569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.463475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.471168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.478870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.485426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.492921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.499966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.506677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.514779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.522373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.529505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.552485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.560029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:04:59.627433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53448","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:05:21.583473Z","caller":"traceutil/trace.go:172","msg":"trace[146397184] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"192.424647ms","start":"2025-12-19T03:05:21.391023Z","end":"2025-12-19T03:05:21.583448Z","steps":["trace[146397184] 'process raft request'  (duration: 128.309225ms)","trace[146397184] 'compare'  (duration: 64.00776ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:05:23.558734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.377714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.558803Z","caller":"traceutil/trace.go:172","msg":"trace[1841268840] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:457; }","duration":"146.515903ms","start":"2025-12-19T03:05:23.412273Z","end":"2025-12-19T03:05:23.558789Z","steps":["trace[1841268840] 'range keys from in-memory index tree'  (duration: 146.286628ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:33.607867Z","caller":"traceutil/trace.go:172","msg":"trace[71649944] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"107.540432ms","start":"2025-12-19T03:05:33.500306Z","end":"2025-12-19T03:05:33.607847Z","steps":["trace[71649944] 'process raft request'  (duration: 107.384396ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:05:35 up 48 min,  0 user,  load average: 7.85, 4.74, 2.83
	Linux default-k8s-diff-port-717222 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [47b2ee9311fc2a187f5ef6d9b2bc951cba514d7f564fb0a749df2f7ff6838334] <==
	I1219 03:05:10.492604       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:05:10.492940       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1219 03:05:10.493121       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:05:10.493147       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:05:10.493177       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:05:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:05:10.790901       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:05:10.790962       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:05:10.790975       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:05:10.791144       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:05:11.091118       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:05:11.091181       1 metrics.go:72] Registering metrics
	I1219 03:05:11.091316       1 controller.go:711] "Syncing nftables rules"
	I1219 03:05:20.802173       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:05:20.802240       1 main.go:301] handling current node
	I1219 03:05:30.793796       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:05:30.793872       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1aac5be4a209eb6275ad2045fa0fcba48df6beac6e85d49a3e1858cfabbd4f5a] <==
	I1219 03:05:00.172141       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1219 03:05:00.172160       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1219 03:05:00.172982       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1219 03:05:00.178323       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:05:00.180574       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:05:00.181218       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1219 03:05:00.375337       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1219 03:05:01.072639       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1219 03:05:01.076625       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1219 03:05:01.076644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1219 03:05:01.635825       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:01.692807       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:01.778972       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1219 03:05:01.784914       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1219 03:05:01.785864       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:05:01.789810       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:02.087334       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 03:05:02.724647       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:02.733210       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:05:02.740396       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:05:07.845298       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:05:07.857723       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:05:08.146004       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:08.202855       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:05:32.897809       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:54088: use of closed network connection
	
	
	==> kube-controller-manager [bdc384498eefdce54dd3234088194dd809a0006a86d9abbc1b35d8b7358ee5ba] <==
	I1219 03:05:06.988303       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 03:05:06.988425       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:05:06.988666       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:05:06.988846       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:05:06.989483       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:05:06.990629       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 03:05:06.990667       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:05:06.993097       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:05:06.994260       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:06.994423       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:06.994446       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:05:06.994499       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:05:06.994538       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:05:06.994545       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:05:06.994552       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:05:06.995599       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:05:06.999879       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:05:07.003156       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:05:07.003854       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-717222" podCIDRs=["10.244.0.0/24"]
	I1219 03:05:07.151491       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1219 03:05:07.187224       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:07.187242       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:05:07.187249       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:05:07.251901       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:21.939609       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3853c74aeea328437af3d1f440a07cab20daa2a5969b9987453c55249b4a0f73] <==
	I1219 03:05:08.801028       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:08.872303       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:05:08.973061       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:05:08.973112       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:05:08.973206       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:09.002099       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:09.002227       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:05:09.012084       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:09.013017       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:05:09.013051       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:09.018414       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:09.018523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:09.018616       1 config.go:200] "Starting service config controller"
	I1219 03:05:09.018645       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:09.018688       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:09.018730       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:09.018812       1 config.go:309] "Starting node config controller"
	I1219 03:05:09.018843       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:09.119250       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:09.119272       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:09.119294       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:05:09.119309       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8653b82dbe260726e6a29910ad54cfc2953e8e03724c9bf0139e963e04141670] <==
	E1219 03:05:00.131073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:05:00.132236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:05:00.132345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:05:00.132460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:05:00.132588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:05:00.132606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:05:00.132610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:05:00.132678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:05:00.132775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:05:00.132972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:05:00.962034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:05:01.048083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:05:01.055486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:05:01.064814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:05:01.069024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:05:01.103232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:05:01.123747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:05:01.203140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:05:01.230098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:05:01.257420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:05:01.262731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:05:01.346020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:05:01.372620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:05:01.421060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1219 03:05:04.425717       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:05:03 default-k8s-diff-port-717222 kubelet[1337]: E1219 03:05:03.611882    1337 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-717222\" already exists" pod="kube-system/etcd-default-k8s-diff-port-717222"
	Dec 19 03:05:03 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:03.647851    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-717222" podStartSLOduration=1.647825397 podStartE2EDuration="1.647825397s" podCreationTimestamp="2025-12-19 03:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:03.62992589 +0000 UTC m=+1.137646806" watchObservedRunningTime="2025-12-19 03:05:03.647825397 +0000 UTC m=+1.155546311"
	Dec 19 03:05:03 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:03.648047    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-717222" podStartSLOduration=1.648035782 podStartE2EDuration="1.648035782s" podCreationTimestamp="2025-12-19 03:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:03.643162221 +0000 UTC m=+1.150883135" watchObservedRunningTime="2025-12-19 03:05:03.648035782 +0000 UTC m=+1.155756699"
	Dec 19 03:05:03 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:03.676855    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-717222" podStartSLOduration=1.6768133349999998 podStartE2EDuration="1.676813335s" podCreationTimestamp="2025-12-19 03:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:03.658106098 +0000 UTC m=+1.165827013" watchObservedRunningTime="2025-12-19 03:05:03.676813335 +0000 UTC m=+1.184534252"
	Dec 19 03:05:07 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:07.018072    1337 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 19 03:05:07 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:07.018898    1337 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 19 03:05:07 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:07.462929    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-717222" podStartSLOduration=5.462904708 podStartE2EDuration="5.462904708s" podCreationTimestamp="2025-12-19 03:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:03.679873645 +0000 UTC m=+1.187594559" watchObservedRunningTime="2025-12-19 03:05:07.462904708 +0000 UTC m=+4.970625622"
	Dec 19 03:05:08 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:08.307063    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a-xtables-lock\") pod \"kindnet-zgcrn\" (UID: \"9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a\") " pod="kube-system/kindnet-zgcrn"
	Dec 19 03:05:08 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:08.307128    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c6d5e13e-bf1d-4f00-8d1d-0711294f20f7-kube-proxy\") pod \"kube-proxy-mr7c8\" (UID: \"c6d5e13e-bf1d-4f00-8d1d-0711294f20f7\") " pod="kube-system/kube-proxy-mr7c8"
	Dec 19 03:05:08 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:08.307148    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6d5e13e-bf1d-4f00-8d1d-0711294f20f7-lib-modules\") pod \"kube-proxy-mr7c8\" (UID: \"c6d5e13e-bf1d-4f00-8d1d-0711294f20f7\") " pod="kube-system/kube-proxy-mr7c8"
	Dec 19 03:05:08 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:08.307172    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8np6n\" (UniqueName: \"kubernetes.io/projected/c6d5e13e-bf1d-4f00-8d1d-0711294f20f7-kube-api-access-8np6n\") pod \"kube-proxy-mr7c8\" (UID: \"c6d5e13e-bf1d-4f00-8d1d-0711294f20f7\") " pod="kube-system/kube-proxy-mr7c8"
	Dec 19 03:05:08 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:08.307211    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a-cni-cfg\") pod \"kindnet-zgcrn\" (UID: \"9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a\") " pod="kube-system/kindnet-zgcrn"
	Dec 19 03:05:08 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:08.307230    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a-lib-modules\") pod \"kindnet-zgcrn\" (UID: \"9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a\") " pod="kube-system/kindnet-zgcrn"
	Dec 19 03:05:08 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:08.307277    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96hr4\" (UniqueName: \"kubernetes.io/projected/9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a-kube-api-access-96hr4\") pod \"kindnet-zgcrn\" (UID: \"9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a\") " pod="kube-system/kindnet-zgcrn"
	Dec 19 03:05:08 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:08.307309    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6d5e13e-bf1d-4f00-8d1d-0711294f20f7-xtables-lock\") pod \"kube-proxy-mr7c8\" (UID: \"c6d5e13e-bf1d-4f00-8d1d-0711294f20f7\") " pod="kube-system/kube-proxy-mr7c8"
	Dec 19 03:05:10 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:10.622191    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mr7c8" podStartSLOduration=2.622166075 podStartE2EDuration="2.622166075s" podCreationTimestamp="2025-12-19 03:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:09.637100525 +0000 UTC m=+7.144821438" watchObservedRunningTime="2025-12-19 03:05:10.622166075 +0000 UTC m=+8.129886989"
	Dec 19 03:05:10 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:10.662125    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zgcrn" podStartSLOduration=1.050709708 podStartE2EDuration="2.662098085s" podCreationTimestamp="2025-12-19 03:05:08 +0000 UTC" firstStartedPulling="2025-12-19 03:05:08.574684207 +0000 UTC m=+6.082405177" lastFinishedPulling="2025-12-19 03:05:10.186072648 +0000 UTC m=+7.693793554" observedRunningTime="2025-12-19 03:05:10.661778729 +0000 UTC m=+8.169499643" watchObservedRunningTime="2025-12-19 03:05:10.662098085 +0000 UTC m=+8.169818999"
	Dec 19 03:05:21 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:21.385019    1337 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 19 03:05:21 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:21.810018    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm7nv\" (UniqueName: \"kubernetes.io/projected/6e82652d-1118-425b-9dc8-2a0cc50bbb7b-kube-api-access-qm7nv\") pod \"coredns-66bc5c9577-dskxl\" (UID: \"6e82652d-1118-425b-9dc8-2a0cc50bbb7b\") " pod="kube-system/coredns-66bc5c9577-dskxl"
	Dec 19 03:05:21 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:21.810080    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/38c2d00a-9d6a-43a7-b9d5-f690dac30c87-tmp\") pod \"storage-provisioner\" (UID: \"38c2d00a-9d6a-43a7-b9d5-f690dac30c87\") " pod="kube-system/storage-provisioner"
	Dec 19 03:05:21 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:21.810104    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmgx6\" (UniqueName: \"kubernetes.io/projected/38c2d00a-9d6a-43a7-b9d5-f690dac30c87-kube-api-access-lmgx6\") pod \"storage-provisioner\" (UID: \"38c2d00a-9d6a-43a7-b9d5-f690dac30c87\") " pod="kube-system/storage-provisioner"
	Dec 19 03:05:21 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:21.810124    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e82652d-1118-425b-9dc8-2a0cc50bbb7b-config-volume\") pod \"coredns-66bc5c9577-dskxl\" (UID: \"6e82652d-1118-425b-9dc8-2a0cc50bbb7b\") " pod="kube-system/coredns-66bc5c9577-dskxl"
	Dec 19 03:05:22 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:22.671456    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dskxl" podStartSLOduration=14.671433457 podStartE2EDuration="14.671433457s" podCreationTimestamp="2025-12-19 03:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:22.671203971 +0000 UTC m=+20.178924908" watchObservedRunningTime="2025-12-19 03:05:22.671433457 +0000 UTC m=+20.179154370"
	Dec 19 03:05:24 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:24.766888    1337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.766862763 podStartE2EDuration="15.766862763s" podCreationTimestamp="2025-12-19 03:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:05:22.696894307 +0000 UTC m=+20.204615221" watchObservedRunningTime="2025-12-19 03:05:24.766862763 +0000 UTC m=+22.274583679"
	Dec 19 03:05:24 default-k8s-diff-port-717222 kubelet[1337]: I1219 03:05:24.830895    1337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj6wt\" (UniqueName: \"kubernetes.io/projected/a9f35053-e166-41af-99cf-2a293efdd88e-kube-api-access-qj6wt\") pod \"busybox\" (UID: \"a9f35053-e166-41af-99cf-2a293efdd88e\") " pod="default/busybox"
	
	
	==> storage-provisioner [609d11ef400103088da7ae53ebd6c92fc6e9271882e8670b717f60de98be15fa] <==
	I1219 03:05:22.070925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:05:22.082443       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:05:22.082609       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 03:05:22.086285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:22.094296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:05:22.094682       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:05:22.095770       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9debd6c3-150e-4adf-a0fc-b415ea50a952", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-717222_5dcfd512-afeb-481c-b2c3-d2c6124cf533 became leader
	I1219 03:05:22.095879       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-717222_5dcfd512-afeb-481c-b2c3-d2c6124cf533!
	W1219 03:05:22.099269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:22.104608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:05:22.197043       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-717222_5dcfd512-afeb-481c-b2c3-d2c6124cf533!
	W1219 03:05:24.110798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:24.117910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:26.122333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:26.127913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:28.131408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:28.136836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:30.141142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:30.147568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:32.152149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:32.158047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:34.161826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:05:34.238775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.99s)
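The post-mortem step above collects cluster state with `minikube status --format={{.APIServer}}` and a `kubectl get po` query restricted by the field selector `status.phase!=Running`. A minimal client-go sketch of that same non-Running-pod check follows; the kubeconfig path and program layout are illustrative assumptions, not the actual minikube helpers_test.go code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube writes its contexts into ~/.kube/config.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List pods in every namespace whose phase is not Running, mirroring
	// `kubectl get po -A --field-selector=status.phase!=Running`.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

An empty result from this check means every pod reported a Running phase at collection time, which is why the post-mortem output for this failure shows no pod names after the field-selector query.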

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:14:46.392720107 +0000 UTC m=+2996.112239494
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
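The wait at start_stop_delete_test.go:272 is a label-selector poll: the test repeatedly lists pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace until one reaches the Running phase or the 9m0s deadline expires, at which point the context deadline error above is reported. A minimal client-go sketch of that kind of wait follows; the kubeconfig path, poll interval, and error handling are illustrative assumptions, not the actual minikube test code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for the old-k8s-version-433330 context.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 5s, up to 9 minutes, for a Running dashboard pod.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // treat list errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("dashboard pod did not become Running:", err)
		return
	}
	fmt.Println("dashboard pod is Running")
}

When the deadline is hit before any matching pod is Running, PollUntilContextTimeout returns a context deadline exceeded error, matching the failure message logged above.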
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-433330
helpers_test.go:244: (dbg) docker inspect old-k8s-version-433330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	        "Created": "2025-12-19T03:03:42.290394762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 338430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:00.142567023Z",
	            "FinishedAt": "2025-12-19T03:04:59.042546116Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hosts",
	        "LogPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18-json.log",
	        "Name": "/old-k8s-version-433330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-433330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-433330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	                "LowerDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-433330",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-433330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-433330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dccc35fac12f6f9c606670826d973be968de80e11b47147853405d102ecda025",
	            "SandboxKey": "/var/run/docker/netns/dccc35fac12f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-433330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ebf807015d65c8db1230e3a313a61194a5685b902dee458d727805bc340fe33d",
	                    "EndpointID": "a6443b6616b36367152fe2b3630db96df1ad95a1774c32a4f279e3a106c8f1e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:3f:cd:fb:94:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-433330",
	                        "ed00f1899233"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
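The inspect dump above can be reduced to the fields relevant here (container state, restart count, published ports) with Go-template formatting; for example (illustrative only):

    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-433330
    docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-433330

In this dump the container is running with RestartCount 0 and all five expected ports are published on 127.0.0.1, so the dashboard wait failure is not explained by the node container itself being down.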
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25: (1.201139197s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
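	The sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) leave /etc/crio/crio.conf.d/02-crio.conf with settings roughly like the following. This is a sketch reconstructed from the commands in this log, not a capture of the actual file:

	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]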
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
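	The host-pinning command above follows a build-then-copy pattern: the new hosts content is assembled in /tmp and then copied over /etc/hosts (cp rather than mv, since /etc/hosts is bind-mounted into the container and can only be overwritten, not replaced). A minimal sketch of the same pattern, using the values from this log:

	    # rebuild /etc/hosts with the control-plane entry pinned, then copy it back
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts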
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
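	Each of the -checkend 86400 runs above exits non-zero if the certificate expires within the next 86,400 seconds (24 hours); presumably this is how the restart path decides whether control-plane certificates can be reused. For example:

	    # exits 0 if the cert is still valid for at least another 24 hours
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid for 24h" \
	      || echo "expires within 24h"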
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
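	The kapi wait above polls the kubernetes-dashboard namespace for a pod labeled app.kubernetes.io/name=kubernetes-dashboard-web until it is Ready. An equivalent manual check (a sketch, assuming kubectl points at the same cluster) would be:

	    kubectl -n kubernetes-dashboard wait pod \
	      -l app.kubernetes.io/name=kubernetes-dashboard-web \
	      --for=condition=Ready --timeout=120s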
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.26889621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.293615522Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc" id=742594fb-fe48-4b31-87af-b5a61cf7ee1b name=/runtime.v1.ImageService/PullImage
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.294882463Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2" id=b149fd9d-fd72-4e11-adb2-25e489e6bf82 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.296980103Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper" id=19f10fdd-5916-41c8-a26a-2ff94d29c5dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.297143775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.301522987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.302174856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.312945079Z" level=info msg="Created container 1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4: kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn/proxy" id=8f1aef5d-9910-4677-95e2-3ddd26dbad0a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.31363451Z" level=info msg="Starting container: 1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4" id=8be5d998-255e-4a4b-8020-90d42f9fb352 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.316425544Z" level=info msg="Started container" PID=1962 containerID=1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4 description=kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn/proxy id=8be5d998-255e-4a4b-8020-90d42f9fb352 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b4016d036c099501205c1263d738aec355ca9ba0985ac0de1a6326f1ba60f4f
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.32575797Z" level=info msg="Created container 9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper" id=19f10fdd-5916-41c8-a26a-2ff94d29c5dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.326784172Z" level=info msg="Starting container: 9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2" id=c58dd769-fa87-40be-a2b3-85b8605bc579 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.329520518Z" level=info msg="Started container" PID=1967 containerID=9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2 description=kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper id=c58dd769-fa87-40be-a2b3-85b8605bc579 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5b7f0901c4eba07cb72103c3ef6c2da1dd3e8c1ae0cbe501ab5646ede4e16ae
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.151028864Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cbd04026-4973-4fb2-a2f5-e1a0bcef1d04 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.152401329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5d524859-0cd0-482d-8890-c3a0b5bfcadf name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.153497878Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=476303b0-0df5-44b1-9467-56549e89f9fa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.153634163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.15821817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.158364577Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fc5e20521cad9e4c78b487769b301845071ca906b069eb3bccc1e9a717925eaf/merged/etc/passwd: no such file or directory"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.1583869Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fc5e20521cad9e4c78b487769b301845071ca906b069eb3bccc1e9a717925eaf/merged/etc/group: no such file or directory"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.158596016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.189477862Z" level=info msg="Created container b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622: kube-system/storage-provisioner/storage-provisioner" id=476303b0-0df5-44b1-9467-56549e89f9fa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.190263305Z" level=info msg="Starting container: b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622" id=094675cc-8230-4bbb-9611-af75aa9fbdb9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.192298533Z" level=info msg="Started container" PID=3386 containerID=b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622 description=kube-system/storage-provisioner/storage-provisioner id=094675cc-8230-4bbb-9611-af75aa9fbdb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0546164e8f444b2265480d306eeac5a7944c866d22f7a7daa5d4a8a97d59bd1
	Dec 19 03:10:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:10:06.979473429Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=88b602ee-9bb9-4765-ba4b-8f37a46dfeb9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	b58c35740f2bd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Running             storage-provisioner                    1                   c0546164e8f44       storage-provisioner                                     kube-system
	9757437ad1c1d       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   9 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   b5b7f0901c4eb       kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2   kubernetes-dashboard
	1a79f7aa9ddca       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           9 minutes ago       Running             proxy                                  0                   5b4016d036c09       kubernetes-dashboard-kong-f487b85cd-7vrxn               kubernetes-dashboard
	43a7239d34381       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             9 minutes ago       Exited              clear-stale-pid                        0                   5b4016d036c09       kubernetes-dashboard-kong-f487b85cd-7vrxn               kubernetes-dashboard
	c787e566a1357       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              9 minutes ago       Running             kubernetes-dashboard-auth              0                   1b21bd00ecbe5       kubernetes-dashboard-auth-96f55cbc9-q6w55               kubernetes-dashboard
	572a9a98a5b17       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               9 minutes ago       Running             kubernetes-dashboard-api               0                   2598675df2023       kubernetes-dashboard-api-6c85dd6d79-gplb7               kubernetes-dashboard
	162ae6553f9ec       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               9 minutes ago       Running             kubernetes-dashboard-web               0                   1832855b57889       kubernetes-dashboard-web-858bd7466-nt8k8                kubernetes-dashboard
	8040658b9f3ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                           9 minutes ago       Running             coredns                                0                   c68d596bc4c32       coredns-5dd5756b68-vp79f                                kube-system
	e0cd612dc1ee9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           9 minutes ago       Running             busybox                                1                   a960ed231cfff       busybox                                                 default
	9243551aa2fc1       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           9 minutes ago       Running             kindnet-cni                            0                   83c7dbba43d07       kindnet-hm2sz                                           kube-system
	9a529209e91c7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                                           9 minutes ago       Running             kube-proxy                             0                   2bfa6386c24f2       kube-proxy-wdrk8                                        kube-system
	4a2a86182d6e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Exited              storage-provisioner                    0                   c0546164e8f44       storage-provisioner                                     kube-system
	ba54120ef227f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                           9 minutes ago       Running             etcd                                   0                   e4fbd268e41d9       etcd-old-k8s-version-433330                             kube-system
	dca7ec4a11ad9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                                           9 minutes ago       Running             kube-controller-manager                0                   2ebbf830bac83       kube-controller-manager-old-k8s-version-433330          kube-system
	6764bc2ee8b6d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                                           9 minutes ago       Running             kube-scheduler                         0                   b8ce7eb1e0991       kube-scheduler-old-k8s-version-433330                   kube-system
	e80d5d62bfdcc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                                           9 minutes ago       Running             kube-apiserver                         0                   5a193f007e64f       kube-apiserver-old-k8s-version-433330                   kube-system
	
	
	==> coredns [8040658b9f3eabbbb5ba47f09aef99df774aaa0bfb4e998b1fb75e4a9d80669f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41940 - 34117 "HINFO IN 2692397503380385834.233192437307976356. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.044493269s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-433330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-433330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-433330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-433330
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:14:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:10:48 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:10:48 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:10:48 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:10:48 +0000   Fri, 19 Dec 2025 03:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-433330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                51a7519b-85cf-4ec7-8319-8a51b3632490
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-vp79f                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-old-k8s-version-433330                              100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-hm2sz                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-old-k8s-version-433330                    250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-433330           200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-wdrk8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-433330                    100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-api-6c85dd6d79-gplb7                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m24s
	  kubernetes-dashboard        kubernetes-dashboard-auth-96f55cbc9-q6w55                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m24s
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-7vrxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m24s
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-nt8k8                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m36s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
	  Normal  NodeReady                10m                    kubelet          Node old-k8s-version-433330 status is now: NodeReady
	  Normal  Starting                 9m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m40s (x8 over 9m41s)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m40s (x8 over 9m41s)  kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m40s (x8 over 9m41s)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m24s                  node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e] <==
	{"level":"info","ts":"2025-12-19T03:05:09.111764Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:09.111787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:09.112507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:05:09.113314Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-19T03:05:23.365687Z","caller":"traceutil/trace.go:171","msg":"trace[1127249932] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"101.175159ms","start":"2025-12-19T03:05:23.264483Z","end":"2025-12-19T03:05:23.365658Z","steps":["trace[1127249932] 'process raft request'  (duration: 101.018627ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.511782Z","caller":"traceutil/trace.go:171","msg":"trace[648081539] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"101.92354ms","start":"2025-12-19T03:05:23.409833Z","end":"2025-12-19T03:05:23.511757Z","steps":["trace[648081539] 'process raft request'  (duration: 101.274955ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.716798Z","caller":"traceutil/trace.go:171","msg":"trace[1286006389] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"111.325708ms","start":"2025-12-19T03:05:23.605446Z","end":"2025-12-19T03:05:23.716772Z","steps":["trace[1286006389] 'process raft request'  (duration: 111.154063ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.762567Z","caller":"traceutil/trace.go:171","msg":"trace[1170228424] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"156.711191ms","start":"2025-12-19T03:05:23.605773Z","end":"2025-12-19T03:05:23.762484Z","steps":["trace[1170228424] 'process raft request'  (duration: 156.477047ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.76258Z","caller":"traceutil/trace.go:171","msg":"trace[176958629] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"155.653437ms","start":"2025-12-19T03:05:23.606903Z","end":"2025-12-19T03:05:23.762556Z","steps":["trace[176958629] 'process raft request'  (duration: 155.495851ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.762606Z","caller":"traceutil/trace.go:171","msg":"trace[11901299] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"154.359966ms","start":"2025-12-19T03:05:23.608234Z","end":"2025-12-19T03:05:23.762594Z","steps":["trace[11901299] 'process raft request'  (duration: 154.193134ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.762855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.14879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.76292Z","caller":"traceutil/trace.go:171","msg":"trace[491680101] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:641; }","duration":"100.257204ms","start":"2025-12-19T03:05:23.662645Z","end":"2025-12-19T03:05:23.762902Z","steps":["trace[491680101] 'agreement among raft nodes before linearized reading'  (duration: 100.093535ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.763103Z","caller":"traceutil/trace.go:171","msg":"trace[1326394039] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"156.274686ms","start":"2025-12-19T03:05:23.606816Z","end":"2025-12-19T03:05:23.763091Z","steps":["trace[1326394039] 'process raft request'  (duration: 155.543051ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.926373Z","caller":"traceutil/trace.go:171","msg":"trace[923941046] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:668; }","duration":"163.896791ms","start":"2025-12-19T03:05:23.762458Z","end":"2025-12-19T03:05:23.926354Z","steps":["trace[923941046] 'read index received'  (duration: 90.331723ms)","trace[923941046] 'applied index is now lower than readState.Index'  (duration: 73.564544ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:23.926443Z","caller":"traceutil/trace.go:171","msg":"trace[1947040731] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"205.888361ms","start":"2025-12-19T03:05:23.720531Z","end":"2025-12-19T03:05:23.926419Z","steps":["trace[1947040731] 'process raft request'  (duration: 132.202751ms)","trace[1947040731] 'compare'  (duration: 73.474481ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:23.92647Z","caller":"traceutil/trace.go:171","msg":"trace[719632072] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"203.800384ms","start":"2025-12-19T03:05:23.722655Z","end":"2025-12-19T03:05:23.926455Z","steps":["trace[719632072] 'process raft request'  (duration: 203.652153ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.926492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.716096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kubernetes-dashboard/\" range_end:\"/registry/limitranges/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.926529Z","caller":"traceutil/trace.go:171","msg":"trace[291568890] range","detail":"{range_begin:/registry/limitranges/kubernetes-dashboard/; range_end:/registry/limitranges/kubernetes-dashboard0; response_count:0; response_revision:643; }","duration":"204.766821ms","start":"2025-12-19T03:05:23.721752Z","end":"2025-12-19T03:05:23.926519Z","steps":["trace[291568890] 'agreement among raft nodes before linearized reading'  (duration: 204.695193ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.950771Z","caller":"traceutil/trace.go:171","msg":"trace[910369966] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"179.377478ms","start":"2025-12-19T03:05:23.77138Z","end":"2025-12-19T03:05:23.950757Z","steps":["trace[910369966] 'process raft request'  (duration: 179.260563ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.950784Z","caller":"traceutil/trace.go:171","msg":"trace[4968190] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"179.447416ms","start":"2025-12-19T03:05:23.771287Z","end":"2025-12-19T03:05:23.950734Z","steps":["trace[4968190] 'process raft request'  (duration: 179.24612ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.951094Z","caller":"traceutil/trace.go:171","msg":"trace[108964002] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"179.505335ms","start":"2025-12-19T03:05:23.771577Z","end":"2025-12-19T03:05:23.951082Z","steps":["trace[108964002] 'process raft request'  (duration: 179.104746ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.951137Z","caller":"traceutil/trace.go:171","msg":"trace[652577346] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"176.993248ms","start":"2025-12-19T03:05:23.774131Z","end":"2025-12-19T03:05:23.951124Z","steps":["trace[652577346] 'process raft request'  (duration: 176.75032ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.951195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.30836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.951226Z","caller":"traceutil/trace.go:171","msg":"trace[1368537699] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:647; }","duration":"183.528611ms","start":"2025-12-19T03:05:23.767688Z","end":"2025-12-19T03:05:23.951216Z","steps":["trace[1368537699] 'agreement among raft nodes before linearized reading'  (duration: 183.469758ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:34.236332Z","caller":"traceutil/trace.go:171","msg":"trace[532828479] transaction","detail":"{read_only:false; response_revision:733; number_of_response:1; }","duration":"124.186623ms","start":"2025-12-19T03:05:34.112115Z","end":"2025-12-19T03:05:34.236302Z","steps":["trace[532828479] 'process raft request'  (duration: 124.016196ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:14:47 up 57 min,  0 user,  load average: 0.55, 1.09, 1.76
	Linux old-k8s-version-433330 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9243551aa2fc190ba88910a889833487b7b7643a56fe7be4cf6f3b57fe4c6fb8] <==
	I1219 03:12:41.952810       1 main.go:301] handling current node
	I1219 03:12:51.948815       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:12:51.948853       1 main.go:301] handling current node
	I1219 03:13:01.951461       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:13:01.951498       1 main.go:301] handling current node
	I1219 03:13:11.948831       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:13:11.948861       1 main.go:301] handling current node
	I1219 03:13:21.949942       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:13:21.949977       1 main.go:301] handling current node
	I1219 03:13:31.944793       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:13:31.944822       1 main.go:301] handling current node
	I1219 03:13:41.944937       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:13:41.944984       1 main.go:301] handling current node
	I1219 03:13:51.947907       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:13:51.947944       1 main.go:301] handling current node
	I1219 03:14:01.952854       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:14:01.952909       1 main.go:301] handling current node
	I1219 03:14:11.948030       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:14:11.948066       1 main.go:301] handling current node
	I1219 03:14:21.944780       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:14:21.944838       1 main.go:301] handling current node
	I1219 03:14:31.952516       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:14:31.952553       1 main.go:301] handling current node
	I1219 03:14:41.946810       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:14:41.946843       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100] <==
	I1219 03:05:12.917102       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:05:12.925470       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:05:15.299162       1 controller.go:624] quota admission added evaluator for: namespaces
	I1219 03:05:15.318644       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1219 03:05:15.345766       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:15.351042       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:15.364786       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.149.234"}
	I1219 03:05:15.368990       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.105.24.29"}
	I1219 03:05:15.376216       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.102.30.164"}
	I1219 03:05:15.381013       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.110.254.107"}
	I1219 03:05:15.385108       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.226.188"}
	I1219 03:05:15.390967       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1219 03:05:23.377371       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:23.377441       1 controller.go:624] quota admission added evaluator for: endpoints
	I1219 03:05:23.604717       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1219 03:10:10.583004       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:10:10.583125       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583190       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583344       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583413       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583485       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583543       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583600       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:10:10.583658       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583735       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	
	
	==> kube-controller-manager [dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386] <==
	I1219 03:05:29.137946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79" duration="10.500982ms"
	I1219 03:05:29.138772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79" duration="222.04µs"
	I1219 03:05:30.139560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9" duration="10.738008ms"
	I1219 03:05:30.141370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9" duration="230.761µs"
	I1219 03:05:35.145518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="124.735µs"
	I1219 03:05:36.153341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="7.771826ms"
	I1219 03:05:36.153487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="81.765µs"
	I1219 03:05:36.161499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="142.354µs"
	I1219 03:05:44.124877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.783071ms"
	I1219 03:05:44.124969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.031µs"
	I1219 03:05:44.322554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="10.292955ms"
	I1219 03:05:44.322813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="137.021µs"
	I1219 03:05:53.457987       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongplugins.configuration.konghq.com"
	I1219 03:05:53.458044       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 03:05:53.458064       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="tcpingresses.configuration.konghq.com"
	I1219 03:05:53.458080       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 03:05:53.458106       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongconsumers.configuration.konghq.com"
	I1219 03:05:53.458129       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongingresses.configuration.konghq.com"
	I1219 03:05:53.458159       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="udpingresses.configuration.konghq.com"
	I1219 03:05:53.458185       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongconsumergroups.configuration.konghq.com"
	I1219 03:05:53.458213       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongcustomentities.configuration.konghq.com"
	I1219 03:05:53.458314       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1219 03:05:53.658752       1 shared_informer.go:318] Caches are synced for resource quota
	I1219 03:05:53.873190       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1219 03:05:53.973659       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [9a529209e91c73dd4753533a0684e92d16e448ef5aed74f450f82867ec17304c] <==
	I1219 03:05:11.436432       1 server_others.go:69] "Using iptables proxy"
	I1219 03:05:11.452009       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1219 03:05:11.479225       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:11.482560       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:05:11.482604       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:05:11.482625       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:05:11.482679       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:05:11.483072       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:05:11.483108       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:11.485106       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:05:11.485126       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:05:11.485951       1 config.go:315] "Starting node config controller"
	I1219 03:05:11.486004       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:05:11.485951       1 config.go:188] "Starting service config controller"
	I1219 03:05:11.486179       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:05:11.585764       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:05:11.587020       1 shared_informer.go:318] Caches are synced for node config
	I1219 03:05:11.587059       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b] <==
	I1219 03:05:08.072216       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:05:10.585445       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:05:10.585508       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:05:10.585524       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:05:10.585535       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:05:10.628537       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:05:10.628629       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:10.631418       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:05:10.631571       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:10.633792       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:05:10.631594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:05:10.734781       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062051     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzc2\" (UniqueName: \"kubernetes.io/projected/970184f3-748e-4083-93e1-27215e7d3544-kube-api-access-hmzc2\") pod \"kubernetes-dashboard-api-6c85dd6d79-gplb7\" (UID: \"970184f3-748e-4083-93e1-27215e7d3544\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062114     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jp56\" (UniqueName: \"kubernetes.io/projected/c53e26af-d9fd-4efc-9354-3b3e505b50f1-kube-api-access-7jp56\") pod \"kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2\" (UID: \"c53e26af-d9fd-4efc-9354-3b3e505b50f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062154     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5d10317b-526d-41f3-8584-7612a5cbf9ef-tmp-volume\") pod \"kubernetes-dashboard-web-858bd7466-nt8k8\" (UID: \"5d10317b-526d-41f3-8584-7612a5cbf9ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-858bd7466-nt8k8"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062245     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28l2\" (UniqueName: \"kubernetes.io/projected/5d10317b-526d-41f3-8584-7612a5cbf9ef-kube-api-access-f28l2\") pod \"kubernetes-dashboard-web-858bd7466-nt8k8\" (UID: \"5d10317b-526d-41f3-8584-7612a5cbf9ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-858bd7466-nt8k8"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062320     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c53e26af-d9fd-4efc-9354-3b3e505b50f1-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2\" (UID: \"c53e26af-d9fd-4efc-9354-3b3e505b50f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062411     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwqf\" (UniqueName: \"kubernetes.io/projected/583637fe-b99f-4b55-8173-e40ef125a4da-kube-api-access-lrwqf\") pod \"kubernetes-dashboard-auth-96f55cbc9-q6w55\" (UID: \"583637fe-b99f-4b55-8173-e40ef125a4da\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062450     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062475     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062493     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/970184f3-748e-4083-93e1-27215e7d3544-tmp-volume\") pod \"kubernetes-dashboard-api-6c85dd6d79-gplb7\" (UID: \"970184f3-748e-4083-93e1-27215e7d3544\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062547     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/583637fe-b99f-4b55-8173-e40ef125a4da-tmp-volume\") pod \"kubernetes-dashboard-auth-96f55cbc9-q6w55\" (UID: \"583637fe-b99f-4b55-8173-e40ef125a4da\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062611     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:27 old-k8s-version-433330 kubelet[727]: I1219 03:05:27.257035     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:27 old-k8s-version-433330 kubelet[727]: I1219 03:05:27.257133     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.110504     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-858bd7466-nt8k8" podStartSLOduration=2.1424808889999998 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.288808133 +0000 UTC m=+17.406061880" lastFinishedPulling="2025-12-19 03:05:27.256749566 +0000 UTC m=+20.374003326" observedRunningTime="2025-12-19 03:05:28.109420313 +0000 UTC m=+21.226674073" watchObservedRunningTime="2025-12-19 03:05:28.110422335 +0000 UTC m=+21.227676096"
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.215638     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.215739     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:29 old-k8s-version-433330 kubelet[727]: I1219 03:05:29.086317     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:29 old-k8s-version-433330 kubelet[727]: I1219 03:05:29.086398     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:30 old-k8s-version-433330 kubelet[727]: I1219 03:05:30.129411     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7" podStartSLOduration=3.221513351 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.307578658 +0000 UTC m=+17.424832408" lastFinishedPulling="2025-12-19 03:05:28.215417358 +0000 UTC m=+21.332671100" observedRunningTime="2025-12-19 03:05:29.130889061 +0000 UTC m=+22.248142823" watchObservedRunningTime="2025-12-19 03:05:30.129352043 +0000 UTC m=+23.246605805"
	Dec 19 03:05:30 old-k8s-version-433330 kubelet[727]: I1219 03:05:30.130193     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55" podStartSLOduration=2.356310917 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.31224463 +0000 UTC m=+17.429498372" lastFinishedPulling="2025-12-19 03:05:29.086067921 +0000 UTC m=+22.203321673" observedRunningTime="2025-12-19 03:05:30.128668409 +0000 UTC m=+23.245922169" watchObservedRunningTime="2025-12-19 03:05:30.130134218 +0000 UTC m=+23.247387978"
	Dec 19 03:05:35 old-k8s-version-433330 kubelet[727]: I1219 03:05:35.294232     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:35 old-k8s-version-433330 kubelet[727]: I1219 03:05:35.294310     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:36 old-k8s-version-433330 kubelet[727]: I1219 03:05:36.145317     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2" podStartSLOduration=2.170852672 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.319586522 +0000 UTC m=+17.436840275" lastFinishedPulling="2025-12-19 03:05:35.293995871 +0000 UTC m=+28.411249625" observedRunningTime="2025-12-19 03:05:36.145033222 +0000 UTC m=+29.262286982" watchObservedRunningTime="2025-12-19 03:05:36.145262022 +0000 UTC m=+29.262515784"
	Dec 19 03:05:36 old-k8s-version-433330 kubelet[727]: I1219 03:05:36.161013     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn" podStartSLOduration=2.986982841 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.31920326 +0000 UTC m=+17.436457054" lastFinishedPulling="2025-12-19 03:05:34.493165004 +0000 UTC m=+27.610418746" observedRunningTime="2025-12-19 03:05:36.16087964 +0000 UTC m=+29.278133404" watchObservedRunningTime="2025-12-19 03:05:36.160944533 +0000 UTC m=+29.278198294"
	Dec 19 03:05:42 old-k8s-version-433330 kubelet[727]: I1219 03:05:42.150477     727 scope.go:117] "RemoveContainer" containerID="4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d"
	
	
	==> kubernetes-dashboard [162ae6553f9ecd77ec53fa67c114215b76b2baa657cb81e65ce4554f0273417d] <==
	I1219 03:05:27.332655       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:27.393367       1 init.go:48] Using in-cluster config
	I1219 03:05:27.393589       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [572a9a98a5b172d9fdd8fb35ee68c37988f906c342e70e89e9bc008440b9c471] <==
	I1219 03:05:28.320430       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:05:28.320512       1 init.go:49] Using in-cluster config
	I1219 03:05:28.320694       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:05:28.320747       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:28.320756       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:28.320762       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:28.327903       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:05:28.327931       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:28.332767       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:28.336184       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:05:58.341672       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2] <==
	E1219 03:12:35.366584       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:13:35.365604       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:14:35.365831       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:12:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:12:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:12:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:12:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:12:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:12:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:12:54 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:12:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:13:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:13:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:13:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:13:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:13:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:13:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:13:54 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:13:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:14:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:14:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:14:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:14:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:14:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:14:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	
	
	==> kubernetes-dashboard [c787e566a13574b826852efdc66cad3575fec6831fdc3a03d85531ffb5a730c9] <==
	I1219 03:05:29.223480       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:05:29.223546       1 init.go:49] Using in-cluster config
	I1219 03:05:29.223660       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d] <==
	I1219 03:05:11.393839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:41.397217       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622] <==
	I1219 03:05:42.205301       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:05:42.214869       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:05:42.214917       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:05:59.616530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:05:59.616620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eca1d2cd-fec8-4561-9433-a93751f8f3f7", APIVersion:"v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3 became leader
	I1219 03:05:59.616726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3!
	I1219 03:05:59.716964       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-433330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:05:52.635811    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:52.641849    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:52.652163    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:52.673224    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:14:50.244296116 +0000 UTC m=+2999.963815502
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
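For reference, the wait that timed out here can be reproduced by hand; a minimal sketch, assuming kubectl is pointed at the kubeconfig context minikube created for this profile (no-preload-278042) and that the dashboard addon deploys its pods into the kubernetes-dashboard namespace:

	# list any pods carrying the label the test polls for
	kubectl --context no-preload-278042 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# block until those pods report Ready, mirroring the test's 9m0s deadline
	kubectl --context no-preload-278042 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s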
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-278042
helpers_test.go:244: (dbg) docker inspect no-preload-278042:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	        "Created": "2025-12-19T03:03:43.244016686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 339111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:01.069592419Z",
	            "FinishedAt": "2025-12-19T03:05:00.08601805Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hostname",
	        "HostsPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hosts",
	        "LogPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35-json.log",
	        "Name": "/no-preload-278042",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-278042:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-278042",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	                "LowerDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-278042",
	                "Source": "/var/lib/docker/volumes/no-preload-278042/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-278042",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-278042",
	                "name.minikube.sigs.k8s.io": "no-preload-278042",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "86d771358686193a8ee27ccd7dd8113a32471ee83b7a9b27de2361ca35da19bf",
	            "SandboxKey": "/var/run/docker/netns/86d771358686",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-278042": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40e663ebb9c92fe8e9b5d1c06f073100d83df79efa76e295e52399b291babbbc",
	                    "EndpointID": "8aa1f1b0831c873e8bd4b8eb538f83b636c1962501683e75418947d1eb28c78e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7e:f0:a4:c4:bd:57",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-278042",
	                        "c49a965a7d8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
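The port mapping buried in the inspect output above can also be read directly with a Go template instead of scanning the full JSON; a minimal sketch, assuming the no-preload-278042 container is still running:

	# print the host port that 8443/tcp inside the container is published on
	docker inspect no-preload-278042 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'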
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-278042 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-278042 logs -n 25: (1.187809572s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
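The three cert installs above follow the OpenSSL trust-directory convention: copy the PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, and symlink `<hash>.0` under /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A minimal Go sketch of that pattern (a hypothetical helper; assumes openssl is on PATH and the process can write /etc/ssl/certs):

// A sketch only: installCA reproduces the hash-and-symlink step from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}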
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
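Each `openssl x509 -checkend 86400` run above asks whether the certificate stays valid for at least another 24 hours; a non-zero exit would mark it for regeneration before the restart proceeds. A minimal Go sketch of the same probe (a hypothetical helper; assumes openssl is on PATH, path taken from the log):

// A sketch only: validForADay wraps the -checkend probe used in the log.
package main

import (
	"fmt"
	"os/exec"
)

func validForADay(certPath string) (bool, error) {
	// openssl exits 0 if the cert is still valid 86400s (24h) from now, 1 otherwise.
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil // expiring within 24h
		}
		return false, err // openssl could not be run at all
	}
	return true, nil
}

func main() {
	ok, err := validForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(ok, err)
}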
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
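restartPrimaryControlPlane above decides whether the running control plane needs reconfiguration by diffing the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new; identical files mean the cluster can be reused as-is, which is why the log reports "does not require reconfiguration". A minimal Go sketch of that check, assuming standard diff(1) exit codes (0 identical, 1 different, >1 error); it is an illustration, not minikube's kubeadm.go:

// A sketch only: needsReconfig compares the deployed and freshly generated kubeadm configs.
package main

import (
	"fmt"
	"os/exec"
)

func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil // identical: reuse the running control plane
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, nil // files differ: reconfiguration required
	}
	return false, err // exit code >1: diff itself failed
}

func main() {
	reconfig, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(reconfig, err)
}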
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
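The sshutil lines above open SSH sessions to the container's forwarded port (127.0.0.1:33133) with the machine's id_rsa; every `ssh_runner.go:195` command in this log travels over such a session. A minimal sketch of running one of those commands with golang.org/x/crypto/ssh (a hypothetical helper, not minikube's sshutil; host-key checking is skipped as it would be for a throwaway local VM):

// A sketch only: runOverSSH executes one remote command the way ssh_runner does.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test VM, no known_hosts
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33133", "docker",
		"/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa",
		"sudo systemctl start kubelet")
	fmt.Println(out, err)
}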
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
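The healthz wait above polls the apiserver on the profile's non-default port 8444 until it returns HTTP 200 with body "ok". A minimal Go sketch of that probe (a hypothetical helper, not minikube's api_server.go; TLS verification is skipped here because the serving cert is issued by minikube's private CA):

// A sketch only: apiserverHealthy polls the /healthz endpoint shown in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Quick probe: skip verification instead of loading the private CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.94.2:8444/healthz")
	fmt.Println(healthy, err)
}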
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
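The pod_ready phase that closes both profiles above waits, label by label, for every control-plane pod in kube-system to report Ready before printing "Done!". A minimal sketch of an equivalent gate using plain `kubectl wait` instead of minikube's internal helper (assumes kubectl is already configured for the target cluster, as the last log line states):

// A sketch only: gate on the same label selectors the pod_ready wait uses.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	labels := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, l := range labels {
		cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
			"--for=condition=Ready", "pod", "-l", l, "--timeout=4m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "pods with label %s not Ready: %v\n", l, err)
			os.Exit(1)
		}
	}
	fmt.Println("all control-plane pods Ready")
}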
	
	
	==> CRI-O <==
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.736394898Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=717982ce-b0aa-47e4-97b9-7ccc9a3d471e name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.737528512Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=75c156d7-ee04-4c56-8f11-b0efea376553 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.737669801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742166616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742306458Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4ebf7307f813aac6a8d23d828e41e3d28400a9be6b5a1e96c9c7ac302e20c6f7/merged/etc/passwd: no such file or directory"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742328757Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4ebf7307f813aac6a8d23d828e41e3d28400a9be6b5a1e96c9c7ac302e20c6f7/merged/etc/group: no such file or directory"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742530495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.773812294Z" level=info msg="Created container 7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f: kube-system/storage-provisioner/storage-provisioner" id=75c156d7-ee04-4c56-8f11-b0efea376553 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.774507779Z" level=info msg="Starting container: 7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f" id=d7a3d3c0-89c8-443d-926a-bf8921e3011d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.776440067Z" level=info msg="Started container" PID=3331 containerID=7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f description=kube-system/storage-provisioner/storage-provisioner id=d7a3d3c0-89c8-443d-926a-bf8921e3011d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c464fbce01c73bc9002a59a55e969a9dcc96c829129ee9c487d0762b3a2a4169
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.362057944Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366564465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366589659Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366607882Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370444341Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370467276Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370484152Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374344046Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374374846Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374396298Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378400072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378429166Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378444369Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.382115308Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.382141451Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	7d6861325db2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Running             storage-provisioner                    1                   c464fbce01c73       storage-provisioner                                     kube-system
	5935e257f3a09       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              9 minutes ago       Running             kubernetes-dashboard-auth              0                   d0d6b23f0e1dc       kubernetes-dashboard-auth-bf9cfccb5-mrw8q               kubernetes-dashboard
	29fec7f14635a       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   9 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   0e0159aebbb3f       kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk   kubernetes-dashboard
	94493b4e71313       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           9 minutes ago       Running             proxy                                  0                   a29ecbc685444       kubernetes-dashboard-kong-78b7499b45-z266g              kubernetes-dashboard
	0c57b1705660a       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             9 minutes ago       Exited              clear-stale-pid                        0                   a29ecbc685444       kubernetes-dashboard-kong-78b7499b45-z266g              kubernetes-dashboard
	bba0b0d89d520       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               9 minutes ago       Running             kubernetes-dashboard-web               0                   8dedb4931ab92       kubernetes-dashboard-web-7f7574785f-h2jf5               kubernetes-dashboard
	d438e50bdc5cf       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               9 minutes ago       Running             kubernetes-dashboard-api               0                   2d9da507d045f       kubernetes-dashboard-api-c7898775-zhmv8                 kubernetes-dashboard
	88f8999e01d5b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                           9 minutes ago       Running             coredns                                0                   192133b79d756       coredns-7d764666f9-vj7lm                                kube-system
	53f1be74e873d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Exited              storage-provisioner                    0                   c464fbce01c73       storage-provisioner                                     kube-system
	bf4ed13bede99       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           9 minutes ago       Running             busybox                                1                   1a93d07c85274       busybox                                                 default
	98dcabe770e7d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           9 minutes ago       Running             kindnet-cni                            0                   c96cb5fa17a00       kindnet-xrp2s                                           kube-system
	757ccd2caa9cd       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                           9 minutes ago       Running             kube-proxy                             0                   4e59b01d6de99       kube-proxy-g2gm4                                        kube-system
	5f148a7e487d8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                           9 minutes ago       Running             etcd                                   0                   03f900ecc7129       etcd-no-preload-278042                                  kube-system
	001407ac1b909       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                           9 minutes ago       Running             kube-controller-manager                0                   d44cf856d1c8b       kube-controller-manager-no-preload-278042               kube-system
	973ccccab2576       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                           9 minutes ago       Running             kube-scheduler                         0                   3f68017fcfb0f       kube-scheduler-no-preload-278042                        kube-system
	821b9cbc72eb6       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                           9 minutes ago       Running             kube-apiserver                         0                   46991eb1a5abd       kube-apiserver-no-preload-278042                        kube-system
	
	
	==> coredns [88f8999e01d5bc23ebc968525542d039ae5c65ebd88f7ecad360345dc8277d94] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57319 - 34037 "HINFO IN 3016703752619529984.3565104935656887276. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019206295s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-278042
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-278042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-278042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-278042
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:14:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:14:11 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:14:11 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:14:11 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:14:11 +0000   Fri, 19 Dec 2025 03:04:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-278042
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                8fbc19b8-72f7-4938-83d9-fc3015dde7d1
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-vj7lm                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-no-preload-278042                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-xrp2s                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-no-preload-278042                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-278042                200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-g2gm4                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-278042                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-api-c7898775-zhmv8                  100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m36s
	  kubernetes-dashboard        kubernetes-dashboard-auth-bf9cfccb5-mrw8q                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m36s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-z266g               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m36s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-h2jf5                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  10m    node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
	  Normal  RegisteredNode  9m38s  node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a] <==
	{"level":"info","ts":"2025-12-19T03:05:08.314592Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-19T03:05:08.316078Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-19T03:05:08.314662Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-19T03:05:08.316205Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-19T03:05:08.316150Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-19T03:05:08.315072Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-19T03:05:08.315130Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:05:08.988344Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988524Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:05:08.988542Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989244Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989319Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:05:08.989346Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989356Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.990632Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-278042 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:05:08.990634Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:05:08.990681Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:05:08.990883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:08.991615Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:08.992858Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:05:08.993684Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:05:09.001234Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:05:09.001416Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 03:14:51 up 57 min,  0 user,  load average: 0.58, 1.09, 1.75
	Linux no-preload-278042 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98dcabe770e7dcf718bfbc7938b663e3dd19fd9ad86c2bd261a4099febad9b1b] <==
	I1219 03:12:51.361020       1 main.go:301] handling current node
	I1219 03:13:01.360329       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:01.360373       1 main.go:301] handling current node
	I1219 03:13:11.365365       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:11.365406       1 main.go:301] handling current node
	I1219 03:13:21.367847       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:21.367877       1 main.go:301] handling current node
	I1219 03:13:31.360941       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:31.360980       1 main.go:301] handling current node
	I1219 03:13:41.365930       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:41.365970       1 main.go:301] handling current node
	I1219 03:13:51.363420       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:13:51.363463       1 main.go:301] handling current node
	I1219 03:14:01.360752       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:14:01.360783       1 main.go:301] handling current node
	I1219 03:14:11.360222       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:14:11.360263       1 main.go:301] handling current node
	I1219 03:14:21.360311       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:14:21.360345       1 main.go:301] handling current node
	I1219 03:14:31.369015       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:14:31.369048       1 main.go:301] handling current node
	I1219 03:14:41.367805       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:14:41.367836       1 main.go:301] handling current node
	I1219 03:14:51.364879       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:14:51.364911       1 main.go:301] handling current node
	
	
	==> kube-apiserver [821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec] <==
	I1219 03:05:13.288571       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	W1219 03:05:13.385125       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.401923       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.413483       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.423560       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.434652       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.450356       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.470070       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.481151       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.492407       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.503960       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.519221       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.528090       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 03:05:13.711310       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:05:13.761392       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:13.862098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:13.961908       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:15.702973       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:05:15.771287       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:15.776040       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:15.788145       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.102.118.21"}
	I1219 03:05:15.795336       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.103.152.147"}
	I1219 03:05:15.798838       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.54.162"}
	I1219 03:05:15.807348       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.173.60"}
	I1219 03:05:15.813204       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.235.156"}
	
	
	==> kube-controller-manager [001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae] <==
	I1219 03:05:13.463362       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463414       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463386       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1219 03:05:13.463438       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463465       1 range_allocator.go:177] "Sending events to api server"
	I1219 03:05:13.463505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1219 03:05:13.463516       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:13.463521       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463634       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463681       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463711       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464012       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464187       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464219       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464367       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464376       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464393       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464411       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.472055       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:14.564522       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.564546       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:05:14.564553       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.564553       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:05:14.572694       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.581900       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [757ccd2caa9cd35651079514b95b85a3612146f0d5b17fa735322d1e2ee036f1] <==
	I1219 03:05:11.015248       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:11.078140       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:11.178544       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:11.178579       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1219 03:05:11.178664       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:11.202324       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:11.202395       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:05:11.207676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:11.208164       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:05:11.208215       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:11.212272       1 config.go:200] "Starting service config controller"
	I1219 03:05:11.212297       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:11.212328       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:11.212333       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:11.212401       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:11.212410       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:11.212604       1 config.go:309] "Starting node config controller"
	I1219 03:05:11.212646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:11.212671       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:11.313219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:05:11.313270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:11.313557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2] <==
	I1219 03:05:08.762319       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:05:10.311124       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:05:10.311291       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:05:10.311314       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:05:10.311345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:05:10.339015       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:05:10.339346       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:10.343655       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:10.343694       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:10.345418       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:10.347040       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:10.447312       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:10:20 no-preload-278042 kubelet[713]: E1219 03:10:20.563359     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:10:31 no-preload-278042 kubelet[713]: E1219 03:10:31.563466     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:10:32 no-preload-278042 kubelet[713]: E1219 03:10:32.562655     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:10:51 no-preload-278042 kubelet[713]: E1219 03:10:51.563449     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:11:03 no-preload-278042 kubelet[713]: E1219 03:11:03.563450     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:11:22 no-preload-278042 kubelet[713]: E1219 03:11:22.563011     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:11:30 no-preload-278042 kubelet[713]: E1219 03:11:30.563242     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:11:32 no-preload-278042 kubelet[713]: E1219 03:11:32.562639     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:11:33 no-preload-278042 kubelet[713]: E1219 03:11:33.563044     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:11:59 no-preload-278042 kubelet[713]: E1219 03:11:59.562780     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:12:11 no-preload-278042 kubelet[713]: E1219 03:12:11.562902     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:12:17 no-preload-278042 kubelet[713]: E1219 03:12:17.563046     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:12:28 no-preload-278042 kubelet[713]: E1219 03:12:28.563365     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:12:42 no-preload-278042 kubelet[713]: E1219 03:12:42.563333     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:12:57 no-preload-278042 kubelet[713]: E1219 03:12:57.563423     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:12:58 no-preload-278042 kubelet[713]: E1219 03:12:58.562595     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:13:21 no-preload-278042 kubelet[713]: E1219 03:13:21.563027     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:13:26 no-preload-278042 kubelet[713]: E1219 03:13:26.563224     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:13:39 no-preload-278042 kubelet[713]: E1219 03:13:39.563230     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:13:55 no-preload-278042 kubelet[713]: E1219 03:13:55.563497     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:14:01 no-preload-278042 kubelet[713]: E1219 03:14:01.563396     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:14:10 no-preload-278042 kubelet[713]: E1219 03:14:10.563008     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:14:20 no-preload-278042 kubelet[713]: E1219 03:14:20.562733     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:14:39 no-preload-278042 kubelet[713]: E1219 03:14:39.562589     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:14:50 no-preload-278042 kubelet[713]: E1219 03:14:50.562829     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	
	
	==> kubernetes-dashboard [29fec7f14635a794f200efe276e62a0fc3151ea3d427cb21da297c53114fd8b9] <==
	10.244.0.1 - - [19/Dec/2025:03:12:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:12:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:12:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:12:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:12:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:12:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:12:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:13:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:13:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:13:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:13:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:13:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:13:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:13:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:13:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:14:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:14:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:14:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:14:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:14:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:14:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:14:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	E1219 03:12:25.194454       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:13:25.195410       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:14:25.194963       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [5935e257f3a0964bf239f408c9308c3b84961c75f06f32b2fda50133fe1ddbbd] <==
	I1219 03:05:26.300513       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:05:26.300578       1 init.go:49] Using in-cluster config
	I1219 03:05:26.300723       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [bba0b0d89d520cc6ca6a07611a31a2778cb1e41e66784ac255b63f970adcffb7] <==
	I1219 03:05:19.397607       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:19.397662       1 init.go:48] Using in-cluster config
	I1219 03:05:19.397903       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [d438e50bdc5cf86c6ad101cf6a3ca9c6c7091524bb7ffd95705de1d1a5ed8994] <==
	I1219 03:05:17.224225       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:05:17.224299       1 init.go:49] Using in-cluster config
	I1219 03:05:17.224498       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:17.224512       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:05:17.224518       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:17.224524       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:17.230241       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 03:05:17.230266       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:17.233542       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:17.236374       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:05:47.240946       1 manager.go:101] Successful request to sidecar
	
	
	==> storage-provisioner [53f1be74e873df0c32c600b228ba909dde859aa38c23f9a71f536c90aa4e096f] <==
	I1219 03:05:10.950483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:40.952323       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f] <==
	W1219 03:14:27.263928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:29.266899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:29.271210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:31.274945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:31.279916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:33.282563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:33.286396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:35.289469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:35.294399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:37.297250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:37.300928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:39.304336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:39.308170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:41.311088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:41.315520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:43.318423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:43.324039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:45.328155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:45.332849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:47.336416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:47.339906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:49.343141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:49.348289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:51.351324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:14:51.355129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-278042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:06:33.600464    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:38.531588    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:15:32.553007491 +0000 UTC m=+3042.272526878
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-805185
helpers_test.go:244: (dbg) docker inspect embed-certs-805185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	        "Created": "2025-12-19T03:04:41.634228453Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:45.883197161Z",
	            "FinishedAt": "2025-12-19T03:05:44.649106592Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hosts",
	        "LogPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415-json.log",
	        "Name": "/embed-certs-805185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-805185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-805185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	                "LowerDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-805185",
	                "Source": "/var/lib/docker/volumes/embed-certs-805185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-805185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-805185",
	                "name.minikube.sigs.k8s.io": "embed-certs-805185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7457f8142accad01c6ab136b22c6fa80ee06dd20e79f2a84f99ffb94723b6308",
	            "SandboxKey": "/var/run/docker/netns/7457f8142acc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-805185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67670b4143fc2c858529db8e9ece90091b3a7a00c5465943bbbbea83d055a550",
	                    "EndpointID": "a46e3becc7625d5ecd97a1cbfefeda9844ff31ce4ce29ae0c0d5c0cbe2af09be",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d6:26:96:9c:9e:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-805185",
	                        "c2b5f77a65ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25: (1.222073489s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
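	Read each entry against that format: the first line below, for example, decodes as severity I (info), date 1219 (December 19), time 03:05:53.092301, process id 352121, source location out.go:360, and then the message text.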
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
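
The addon flow above stages each manifest under /etc/kubernetes/addons over scp and applies it with the kubectl binary bundled for the cluster's Kubernetes version. A minimal follow-up check, a sketch assuming the same in-node paths shown in the log, would be:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.3/kubectl -n kube-system get pod storage-provisioner
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.3/kubectl get storageclass
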
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
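
The 500 responses above are expected while apiserver post-start hooks (here rbac/bootstrap-roles and, at first, scheduling/bootstrap-system-priority-classes) are still finishing; the wait loop keeps polling until /healthz returns 200, which it does at 03:05:55.871156 further down. A rough manual equivalent of that probe, assuming the kubeconfig context is named after the profile and that default RBAC still permits anonymous reads of /healthz:

	kubectl --context embed-certs-805185 get --raw='/healthz?verbose'
	# or directly against the endpoint shown in the log:
	curl -k https://192.168.85.2:8443/healthz?verbose
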
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
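
configureAuth above regenerates the machine server certificate with the SANs listed at 03:05:57.295936 and copies it to /etc/docker/server.pem on the node. A hedged way to inspect the installed certificate, assuming openssl is present in the kicbase image:

	docker exec default-k8s-diff-port-717222 sh -c \
	  "openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"
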
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
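	
	(Aside, not part of the captured log.) The lines above show minikube rewriting /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls), restarting CRI-O, and probing it with crictl. A minimal sketch of how those settings could be spot-checked by hand, assuming the profile name from this run and that "minikube ssh -- <command>" is available:
	
	  minikube -p default-k8s-diff-port-717222 ssh -- \
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf                                  # values written by the sed commands above
	  minikube -p default-k8s-diff-port-717222 ssh -- sudo crictl version     # should report RuntimeName: cri-o, RuntimeVersion: 1.34.3
	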
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
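	
	(Aside, not part of the captured log.) The block above is the full multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube generates; a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a hedged sketch, not something this run does: recent kubeadm releases include a validate subcommand that can sanity-check such a file offline, roughly:
	
	  # assumes the generated config has been saved locally as kubeadm.yaml;
	  # "kubeadm config validate" checks API versions and field names without touching the cluster
	  kubeadm config validate --config kubeadm.yaml
	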
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
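	
	(Aside, not part of the captured log.) A minimal sketch of how the finished state reported above could be confirmed from the host, assuming the kubeconfig written by this run and that minikube names the kubectl context after the profile:
	
	  kubectl config current-context                 # expected: default-k8s-diff-port-717222
	  kubectl get pods -n kubernetes-dashboard       # dashboard addon enabled earlier in this log
	  curl -k https://192.168.94.2:8444/healthz      # same endpoint the log polls; should return "ok"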
	
	
	==> CRI-O <==
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.472463868Z" level=info msg="Created container d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/clear-stale-pid" id=36313b84-f615-418e-a0c2-1800c7ad9bba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.473232027Z" level=info msg="Starting container: d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885" id=fa0d9c25-58bb-41e7-a751-32c0a5ee2072 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.475578796Z" level=info msg="Started container" PID=1981 containerID=d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/clear-stale-pid id=fa0d9c25-58bb-41e7-a751-32c0a5ee2072 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4243852a152c440419680bef0dfbf6f37d15c21f97f0c7059823f1520f9fc99c
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.135352218Z" level=info msg="Checking image status: kong:3.9" id=b06c69a2-5538-434a-8a72-4f2b223b8bfe name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.135542093Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.137747838Z" level=info msg="Checking image status: kong:3.9" id=9a4a1d08-b9e8-4169-83f7-aec209f5e0b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.13786748Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.142013294Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy" id=de23cef6-56d6-4c7e-a45c-c26d931492ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.142148287Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.148827695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.149609559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.189335726Z" level=info msg="Created container 20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy" id=de23cef6-56d6-4c7e-a45c-c26d931492ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.190165238Z" level=info msg="Starting container: 20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2" id=b6a1bf8a-ce95-4b10-adea-c7af131524c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.192808924Z" level=info msg="Started container" PID=1991 containerID=20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy id=b6a1bf8a-ce95-4b10-adea-c7af131524c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4243852a152c440419680bef0dfbf6f37d15c21f97f0c7059823f1520f9fc99c
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.183170694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=084cd7a4-6ece-4c0a-8397-94465f3314df name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.184121665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4d531b84-18eb-47e0-aad8-61f09bca340d name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.185241228Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=156d4e77-f8f4-4dbd-a7c4-e7b60cc39d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.18538707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.189952355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190095237Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/77220e3b053f54d12f604ca801e81ba41d5ddbdc6900f8b55f5f4338438a1241/merged/etc/passwd: no such file or directory"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190117712Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/77220e3b053f54d12f604ca801e81ba41d5ddbdc6900f8b55f5f4338438a1241/merged/etc/group: no such file or directory"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190333672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.231341429Z" level=info msg="Created container 3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904: kube-system/storage-provisioner/storage-provisioner" id=156d4e77-f8f4-4dbd-a7c4-e7b60cc39d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.232031749Z" level=info msg="Starting container: 3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904" id=26b83eef-8d30-46ec-89bd-a94e4e05ed3b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.234124046Z" level=info msg="Started container" PID=3409 containerID=3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904 description=kube-system/storage-provisioner/storage-provisioner id=26b83eef-8d30-46ec-89bd-a94e4e05ed3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c1876caf93065afdf67bc083a0b6fc921040c35760414f728f15ba554180160
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	3d7dd245b233f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Running             storage-provisioner                    1                   0c1876caf9306       storage-provisioner                                     kube-system
	20beadfa950bf       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           9 minutes ago       Running             proxy                                  0                   4243852a152c4       kubernetes-dashboard-kong-9849c64bd-9p6zf               kubernetes-dashboard
	d14c5a7b642f8       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             9 minutes ago       Exited              clear-stale-pid                        0                   4243852a152c4       kubernetes-dashboard-kong-9849c64bd-9p6zf               kubernetes-dashboard
	a0449cd056863       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              9 minutes ago       Running             kubernetes-dashboard-auth              0                   db4923db488cf       kubernetes-dashboard-auth-658884f98f-455ns              kubernetes-dashboard
	95cc887c80866       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               9 minutes ago       Running             kubernetes-dashboard-web               0                   4037dc076fb10       kubernetes-dashboard-web-5c9f966b98-gfhnn               kubernetes-dashboard
	310b39bacccab       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   9 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   0be0ce9f85847       kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr   kubernetes-dashboard
	5b4f781150596       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               9 minutes ago       Running             kubernetes-dashboard-api               0                   5af5195e34c00       kubernetes-dashboard-api-78bc857d5c-fljnp               kubernetes-dashboard
	37fd60f84cab5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           9 minutes ago       Running             coredns                                0                   f0f30eba64edf       coredns-66bc5c9577-8gphx                                kube-system
	e8ff222bdb55d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           9 minutes ago       Running             busybox                                1                   523d107bc5d8f       busybox                                                 default
	3e6a9f16432bb       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           9 minutes ago       Running             kube-proxy                             0                   4fb4de09d3b1c       kube-proxy-p8pqg                                        kube-system
	3df3cb7877110       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Exited              storage-provisioner                    0                   0c1876caf9306       storage-provisioner                                     kube-system
	9734264bc0316       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           9 minutes ago       Running             kindnet-cni                            0                   e566763b65b28       kindnet-jj9ms                                           kube-system
	dca8f84f406b7       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           9 minutes ago       Running             kube-controller-manager                0                   1479078fc9c08       kube-controller-manager-embed-certs-805185              kube-system
	c0e9c22a25238       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           9 minutes ago       Running             kube-scheduler                         0                   49e7ef6075ae3       kube-scheduler-embed-certs-805185                       kube-system
	e4f794af7924e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           9 minutes ago       Running             etcd                                   0                   c8ef977665655       etcd-embed-certs-805185                                 kube-system
	fa9a88171fdc7       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           9 minutes ago       Running             kube-apiserver                         0                   d92a0248993ee       kube-apiserver-embed-certs-805185                       kube-system
	
	
	==> coredns [37fd60f84cab5a40d06b06eda266df17eadd8d0a9ee56f7b235782087ec0083a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40097 - 29931 "HINFO IN 2735309851509519627.415811791505313667. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.415024708s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-805185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-805185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-805185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-805185
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:15:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:12:31 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:12:31 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:12:31 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:12:31 +0000   Fri, 19 Dec 2025 03:05:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-805185
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e529c61b-35ad-4151-ab38-525026482d8c
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-8gphx                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-embed-certs-805185                                  100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-jj9ms                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-embed-certs-805185                        250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-embed-certs-805185               200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-p8pqg                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-embed-certs-805185                        100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-api-78bc857d5c-fljnp                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m35s
	  kubernetes-dashboard        kubernetes-dashboard-auth-658884f98f-455ns               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m35s
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-9p6zf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m35s
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-gfhnn                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
	  Normal  NodeReady                10m                    kubelet          Node embed-certs-805185 status is now: NodeReady
	  Normal  Starting                 9m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m41s (x8 over 9m41s)  kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m41s (x8 over 9m41s)  kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m41s (x8 over 9m41s)  kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m36s                  node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [e4f794af7924e48700f3eb1f53c1070c15bc99d17539d5f097c1a7c62dded81f] <==
	{"level":"warn","ts":"2025-12-19T03:05:53.664914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.675237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.683067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.691097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.700439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.709666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.719221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.745613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.755575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.779584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.825911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.666523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.686420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.703183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.714636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.724682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.735837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.746037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.755589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.784157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.802436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.825473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:06:04.808381Z","caller":"traceutil/trace.go:172","msg":"trace[24513416] transaction","detail":"{read_only:false; response_revision:699; number_of_response:1; }","duration":"118.600036ms","start":"2025-12-19T03:06:04.689759Z","end":"2025-12-19T03:06:04.808359Z","steps":["trace[24513416] 'process raft request'  (duration: 118.551956ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:06:04.808596Z","caller":"traceutil/trace.go:172","msg":"trace[1604688651] transaction","detail":"{read_only:false; response_revision:698; number_of_response:1; }","duration":"178.640288ms","start":"2025-12-19T03:06:04.629933Z","end":"2025-12-19T03:06:04.808573Z","steps":["trace[1604688651] 'process raft request'  (duration: 128.977486ms)","trace[1604688651] 'compare'  (duration: 49.259539ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:06:10.029004Z","caller":"traceutil/trace.go:172","msg":"trace[1715983664] transaction","detail":"{read_only:false; response_revision:712; number_of_response:1; }","duration":"117.29944ms","start":"2025-12-19T03:06:09.911684Z","end":"2025-12-19T03:06:10.028983Z","steps":["trace[1715983664] 'process raft request'  (duration: 95.039156ms)","trace[1715983664] 'compare'  (duration: 21.881704ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:15:33 up 58 min,  0 user,  load average: 0.37, 0.96, 1.68
	Linux embed-certs-805185 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9734264bc03165e973381a11181db3d0d85532eb608a1d648d545affcc0f5657] <==
	I1219 03:13:25.875676       1 main.go:301] handling current node
	I1219 03:13:35.873642       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:13:35.873677       1 main.go:301] handling current node
	I1219 03:13:45.868881       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:13:45.868927       1 main.go:301] handling current node
	I1219 03:13:55.867559       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:13:55.867620       1 main.go:301] handling current node
	I1219 03:14:05.875771       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:14:05.875803       1 main.go:301] handling current node
	I1219 03:14:15.868052       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:14:15.868102       1 main.go:301] handling current node
	I1219 03:14:25.875600       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:14:25.875642       1 main.go:301] handling current node
	I1219 03:14:35.875052       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:14:35.875091       1 main.go:301] handling current node
	I1219 03:14:45.870810       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:14:45.870847       1 main.go:301] handling current node
	I1219 03:14:55.867875       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:14:55.867920       1 main.go:301] handling current node
	I1219 03:15:05.874498       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:15:05.874534       1 main.go:301] handling current node
	I1219 03:15:15.872771       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:15:15.872803       1 main.go:301] handling current node
	I1219 03:15:25.868437       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:15:25.868467       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fa9a88171fdc75e01df96259a9096dab5e5ab76217553f36b6a9922f9e0f06fe] <==
	I1219 03:05:56.181694       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	W1219 03:05:57.666179       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:05:57.686342       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.703087       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.714554       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.724651       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:05:57.735825       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.745925       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.755549       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.773268       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.784117       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:05:57.795282       1 controller.go:667] quota admission added evaluator for: endpoints
	W1219 03:05:57.802417       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.819295       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:05:57.894304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:57.991073       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:58.143944       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:58.544436       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:05:58.579983       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:58.584890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:58.595427       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.101.245.250"}
	I1219 03:05:58.600356       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.48.46"}
	I1219 03:05:58.604096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.96.197.102"}
	I1219 03:05:58.610018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.99.175"}
	I1219 03:05:58.616775       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.250.73"}
	
	
	==> kube-controller-manager [dca8f84f406b7acd8227404694ece4fd29d232591939f26e4325c52e7c00de60] <==
	I1219 03:05:57.736964       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:05:57.737011       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 03:05:57.737131       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 03:05:57.737248       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:05:57.737588       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:05:57.737617       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 03:05:57.738742       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:05:57.738773       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:05:57.738742       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:05:57.744005       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:05:57.744039       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:05:57.744147       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:05:57.744203       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:05:57.744212       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:05:57.744220       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:05:57.746255       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:05:57.747424       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:05:57.753898       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:05:57.755198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:05:58.841753       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:58.868581       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:58.874821       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:58.881981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:58.882003       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:05:58.882012       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3e6a9f16432bb2d0f57c9e657b776eaae753f9a9bc474bcd825b022f2cf4726b] <==
	I1219 03:05:55.448309       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:55.528222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:05:55.628850       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:05:55.628898       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:05:55.629015       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:55.649512       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:55.649574       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:05:55.655220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:55.655665       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:05:55.655695       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:55.657141       1 config.go:200] "Starting service config controller"
	I1219 03:05:55.657618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:55.657697       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:55.657751       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:55.658014       1 config.go:309] "Starting node config controller"
	I1219 03:05:55.658027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:55.658041       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:55.658491       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:55.658532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:55.757856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:55.759651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:05:55.759720       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0e9c22a2523807e95fb727795c040c95c5bd029feb66a6a92f7087e4503774e] <==
	I1219 03:05:53.750115       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:05:54.696153       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:05:54.696180       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:54.700571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.700567       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:05:54.700623       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.700627       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:05:54.700603       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.700660       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.701061       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:54.701240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:54.801652       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.801670       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.801652       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.784900     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060-tmp-volume\") pod \"kubernetes-dashboard-auth-658884f98f-455ns\" (UID: \"c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.784992     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/30a45022-1901-4ea6-8857-08ff9a85c27a-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-9p6zf\" (UID: \"30a45022-1901-4ea6-8857-08ff9a85c27a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785031     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47j2v\" (UniqueName: \"kubernetes.io/projected/f73d26a9-48d2-47fc-a241-1a7504297988-kube-api-access-47j2v\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr\" (UID: \"f73d26a9-48d2-47fc-a241-1a7504297988\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785063     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c9c9b86-fd2a-4420-b98d-27dd078fe2c6-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-gfhnn\" (UID: \"2c9c9b86-fd2a-4420-b98d-27dd078fe2c6\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785080     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474hq\" (UniqueName: \"kubernetes.io/projected/c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060-kube-api-access-474hq\") pod \"kubernetes-dashboard-auth-658884f98f-455ns\" (UID: \"c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785095     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab309a53-9e4b-4a01-899a-797c7ba5208d-tmp-volume\") pod \"kubernetes-dashboard-api-78bc857d5c-fljnp\" (UID: \"ab309a53-9e4b-4a01-899a-797c7ba5208d\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785116     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzfm\" (UniqueName: \"kubernetes.io/projected/ab309a53-9e4b-4a01-899a-797c7ba5208d-kube-api-access-6zzfm\") pod \"kubernetes-dashboard-api-78bc857d5c-fljnp\" (UID: \"ab309a53-9e4b-4a01-899a-797c7ba5208d\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785138     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f73d26a9-48d2-47fc-a241-1a7504297988-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr\" (UID: \"f73d26a9-48d2-47fc-a241-1a7504297988\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785164     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7smc\" (UniqueName: \"kubernetes.io/projected/2c9c9b86-fd2a-4420-b98d-27dd078fe2c6-kube-api-access-k7smc\") pod \"kubernetes-dashboard-web-5c9f966b98-gfhnn\" (UID: \"2c9c9b86-fd2a-4420-b98d-27dd078fe2c6\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785222     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/30a45022-1901-4ea6-8857-08ff9a85c27a-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-9p6zf\" (UID: \"30a45022-1901-4ea6-8857-08ff9a85c27a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf"
	Dec 19 03:05:59 embed-certs-805185 kubelet[737]: I1219 03:05:59.997824     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:59 embed-certs-805185 kubelet[737]: I1219 03:05:59.997922     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.037195     737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.097959     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp" podStartSLOduration=1.09098601 podStartE2EDuration="2.097935412s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:58.990618466 +0000 UTC m=+7.051227125" lastFinishedPulling="2025-12-19 03:05:59.997567856 +0000 UTC m=+8.058176527" observedRunningTime="2025-12-19 03:06:00.097689886 +0000 UTC m=+8.158298580" watchObservedRunningTime="2025-12-19 03:06:00.097935412 +0000 UTC m=+8.158544082"
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.934970     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.936003     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:02 embed-certs-805185 kubelet[737]: I1219 03:06:02.793612     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr" podStartSLOduration=2.864491069 podStartE2EDuration="4.793587364s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.005628182 +0000 UTC m=+7.066236856" lastFinishedPulling="2025-12-19 03:06:00.934724484 +0000 UTC m=+8.995333151" observedRunningTime="2025-12-19 03:06:01.111916375 +0000 UTC m=+9.172525051" watchObservedRunningTime="2025-12-19 03:06:02.793587364 +0000 UTC m=+10.854196040"
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.028076     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.028167     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.121599     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn" podStartSLOduration=1.100576683 podStartE2EDuration="6.121572519s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.006841332 +0000 UTC m=+7.067449988" lastFinishedPulling="2025-12-19 03:06:04.027837166 +0000 UTC m=+12.088445824" observedRunningTime="2025-12-19 03:06:04.121201067 +0000 UTC m=+12.181809743" watchObservedRunningTime="2025-12-19 03:06:04.121572519 +0000 UTC m=+12.182181195"
	Dec 19 03:06:05 embed-certs-805185 kubelet[737]: I1219 03:06:05.244202     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:05 embed-certs-805185 kubelet[737]: I1219 03:06:05.244300     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:06 embed-certs-805185 kubelet[737]: I1219 03:06:06.135487     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns" podStartSLOduration=1.904186191 podStartE2EDuration="8.135456486s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.012692427 +0000 UTC m=+7.073301081" lastFinishedPulling="2025-12-19 03:06:05.243962705 +0000 UTC m=+13.304571376" observedRunningTime="2025-12-19 03:06:06.134881051 +0000 UTC m=+14.195489728" watchObservedRunningTime="2025-12-19 03:06:06.135456486 +0000 UTC m=+14.196065161"
	Dec 19 03:06:12 embed-certs-805185 kubelet[737]: I1219 03:06:12.162006     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf" podStartSLOduration=2.749011678 podStartE2EDuration="14.161975971s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.023057738 +0000 UTC m=+7.083666406" lastFinishedPulling="2025-12-19 03:06:10.436022033 +0000 UTC m=+18.496630699" observedRunningTime="2025-12-19 03:06:12.161201474 +0000 UTC m=+20.221810169" watchObservedRunningTime="2025-12-19 03:06:12.161975971 +0000 UTC m=+20.222584647"
	Dec 19 03:06:26 embed-certs-805185 kubelet[737]: I1219 03:06:26.182763     737 scope.go:117] "RemoveContainer" containerID="3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2"
	
	
	==> kubernetes-dashboard [310b39bacccabe01a7800d05d30675f93096703212a17f66095da8c1865d22d2] <==
	E1219 03:13:01.082254       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:14:01.082648       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:15:01.082344       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:12:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:13:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:13:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:14:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:14:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:15:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	
	
	==> kubernetes-dashboard [5b4f7811505964d9e14b039acff4c61a760a6112e63bfff6242995499ee3b049] <==
	I1219 03:06:00.157650       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:06:00.157768       1 init.go:49] Using in-cluster config
	I1219 03:06:00.158043       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:06:00.158057       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:06:00.158064       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:06:00.158072       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:06:00.164066       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:06:00.164098       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:06:00.190400       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:06:00.190937       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:06:30.196244       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [95cc887c80866d0ea33ef79f7654625e51e2590ee08a32fae89a8d46347f529a] <==
	I1219 03:06:04.155476       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:06:04.155552       1 init.go:48] Using in-cluster config
	I1219 03:06:04.155767       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [a0449cd05686367a0a816405c686858df4a264fbcacf43407705baff34ccbc5a] <==
	I1219 03:06:05.338222       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:06:05.338287       1 init.go:49] Using in-cluster config
	I1219 03:06:05.338471       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904] <==
	W1219 03:15:09.688457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:11.691953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:11.697380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:13.700885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:13.705202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:15.708590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:15.712885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:17.715631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:17.719737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:19.722905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:19.728174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:21.731346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:21.735582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:23.739238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:23.743172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:25.746215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:25.751228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:27.753905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:27.757849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:29.760978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:29.766120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:31.769322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:31.773433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:33.776446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:33.781634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2] <==
	I1219 03:05:55.403581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:06:25.407035       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-805185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:06:41.456208    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:14.561220    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:22.416849    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:32.626654    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:38.269101    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:38.274374    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:38.284603    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:38.304880    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:38.345212    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:38.425576    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:38.586199    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:38.906836    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:39.547135    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:40.827799    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:42.699932    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:43.388924    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:48.485224    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:48.490558    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:48.500893    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:48.510120    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:48.521259    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:48.561665    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:48.641996    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:48.802504    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:49.123479    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:49.764050    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:51.045048    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:53.605853    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:58.726420    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:58.750590    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:08.967043    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:19.231136    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:29.448233    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:36.481460    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:44.337664    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:59.335839    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:59.341026    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:59.351365    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:59.371762    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:59.412077    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:59.492483    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:59.652887    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:08:59.973882    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:00.191928    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:00.614055    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:01.895004    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:04.456014    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.208230    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.213628    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.223892    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.244272    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.284599    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.364993    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.525437    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.576669    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:09.846028    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:10.408540    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:10.486881    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:11.767174    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:14.327671    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:19.448682    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:19.817146    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:29.688949    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:40.298089    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:48.777278    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:09:50.169203    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:02.345804    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:16.467592    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:21.258830    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:22.112451    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:31.129871    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:32.329444    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:45.748565    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:52.635807    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:11:00.493021    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:11:20.322678    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:11:28.178774    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:11:38.531046    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:11:43.179748    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:11:53.050515    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:12:38.269336    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:12:42.700338    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:12:48.485913    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:13:05.394265    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:13:05.953156    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:13:16.170026    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:13:59.336558    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:14:09.207772    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:14:27.020790    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:14:36.891291    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:15:39.559848159 +0000 UTC m=+3049.279367536
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-717222
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-717222:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	        "Created": "2025-12-19T03:04:47.206515223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:53.385310779Z",
	            "FinishedAt": "2025-12-19T03:05:52.262245388Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hosts",
	        "LogPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59-json.log",
	        "Name": "/default-k8s-diff-port-717222",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-717222:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-717222",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	                "LowerDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-717222",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-717222/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-717222",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9d06f7aea24e94d05365ef4f03fb5f64c6b5272dae79bd49619bd1821269410e",
	            "SandboxKey": "/var/run/docker/netns/9d06f7aea24e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-717222": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61bece957d17b845e006f35e9e337693d4d396daf2e4f93e70692be3f3288cbb",
	                    "EndpointID": "2c278581ff3b356f6bebafb94e691fc066cab71fa7bdd973be671471a23efca1",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ae:9c:c1:61:6a:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-717222",
	                        "f8284300a033"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25: (1.215427787s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
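The empty ExecStart= followed by a second ExecStart= in the drop-in above is the usual systemd override idiom: the blank line clears the ExecStart list inherited from kubelet.service so that only minikube's flags apply. The effective unit plus the 10-kubeadm.conf drop-in can be inspected on the node itself (a sketch, using this run's profile name):

    # Print kubelet.service together with every drop-in systemd has loaded for it.
    minikube -p default-k8s-diff-port-717222 ssh -- "systemctl cat kubelet"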
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
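The block above is the multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that minikube uploads as /var/tmp/minikube/kubeadm.yaml.new and later diffs against the existing kubeadm.yaml. One quick way to list the documents in the uploaded file (a sketch, assuming the cluster from this run is still up):

    # List the apiVersion/kind of each document in the generated kubeadm config.
    minikube -p default-k8s-diff-port-717222 ssh -- \
      "grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new"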
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
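Each openssl x509 -checkend 86400 call above exits 0 only if the certificate will still be valid 24 hours from now, which is how the run verifies that the existing control-plane and etcd client certs can be kept. The same check can be repeated by hand against any file under /var/lib/minikube/certs (a sketch, profile name and cert path taken from this run):

    # Exit status 0 means apiserver.crt will not expire within the next 86400s (24h).
    minikube -p default-k8s-diff-port-717222 ssh -- \
      "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400" \
      && echo "apiserver cert valid for >= 24h" \
      || echo "apiserver cert expires within 24h (or the check could not run)"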
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
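The dashboard addon is installed here through the upstream kubernetes-dashboard Helm chart rather than static manifests: the release lands in the kubernetes-dashboard namespace with the nginx, cert-manager and metrics-server subcharts disabled and the Kong proxy exposed as a NodePort, exactly as the --set flags above specify. After the start finishes, the release can be inspected with standard Helm commands (a sketch, assuming the local kubeconfig points at this cluster):

    # Show the Helm release that backs the dashboard addon and its supplied values.
    helm list -n kubernetes-dashboard
    helm get values kubernetes-dashboard -n kubernetes-dashboard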
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
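The healthz probe above goes straight to https://192.168.94.2:8444/healthz; once the kubeconfig has been repaired (see the kubeconfig.go lines earlier), the same endpoint is reachable through kubectl (a sketch):

    # Prints "ok" when the API server reports healthy, mirroring the probe in the log.
    kubectl --context default-k8s-diff-port-717222 get --raw /healthz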
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
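The kapi wait above polls pods labelled app.kubernetes.io/name=kubernetes-dashboard-web in the kubernetes-dashboard namespace until one leaves Pending. A roughly equivalent one-shot check with kubectl (a sketch, using this run's context name):

    # Block until the dashboard web pod is Ready, or give up after two minutes.
    kubectl --context default-k8s-diff-port-717222 -n kubernetes-dashboard wait \
      --for=condition=Ready pod -l app.kubernetes.io/name=kubernetes-dashboard-web --timeout=120s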
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
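The sections that follow are post-mortem state captured from the default-k8s-diff-port-717222 node: CRI-O daemon log entries, then a container table. Similar snapshots can be taken by hand (a sketch; the journalctl invocation is an assumption, while crictl appears elsewhere in this run):

    # CRI-O daemon logs and the full container list, as shown in the sections below.
    minikube -p default-k8s-diff-port-717222 ssh -- "sudo journalctl -u crio --no-pager | tail -n 100"
    minikube -p default-k8s-diff-port-717222 ssh -- "sudo crictl ps -a"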
	
	
	==> CRI-O <==
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.105809849Z" level=info msg="Created container 35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/clear-stale-pid" id=5a645826-349a-438a-8096-df1ef85fa13f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.106574675Z" level=info msg="Starting container: 35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270" id=cac0719a-4055-47b5-9cd1-db557fa3cd0a name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.108867589Z" level=info msg="Started container" PID=1966 containerID=35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/clear-stale-pid id=cac0719a-4055-47b5-9cd1-db557fa3cd0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8667e85e98055a9a632295b2700a5f7426d8a381a2d0a88f63c9f25413fcbb91
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.244843017Z" level=info msg="Checking image status: kong:3.9" id=0cec8e99-8e10-454e-875b-ea15d4a209cd name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.245030729Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.247083766Z" level=info msg="Checking image status: kong:3.9" id=3f2254f1-a52b-4104-87c2-661e1bd23ec3 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.247306541Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.25336671Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy" id=94e64d89-1339-4146-8f9c-53f734cb5589 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.253525887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.260510197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.261326368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.301363315Z" level=info msg="Created container dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy" id=94e64d89-1339-4146-8f9c-53f734cb5589 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.30215616Z" level=info msg="Starting container: dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650" id=12ee788e-8289-4593-8fc9-302f77b8987b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.304379149Z" level=info msg="Started container" PID=1977 containerID=dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy id=12ee788e-8289-4593-8fc9-302f77b8987b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8667e85e98055a9a632295b2700a5f7426d8a381a2d0a88f63c9f25413fcbb91
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.293364694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7a2b6641-2330-4f1c-8ac3-bd5fc486ac9a name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.294343816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25406107-20f3-4be8-a6d5-7899eb74be0f name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.295572666Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ee62b1e2-d30f-4a02-8252-3cb5cb6d1371 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.295760296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302496713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302683962Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/79e095d41c86d78a8608ae66d21f2385ceb6046ec971057f76b5fefa76ae5f30/merged/etc/passwd: no such file or directory"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302750865Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/79e095d41c86d78a8608ae66d21f2385ceb6046ec971057f76b5fefa76ae5f30/merged/etc/group: no such file or directory"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.303093477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.338341513Z" level=info msg="Created container d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6: kube-system/storage-provisioner/storage-provisioner" id=ee62b1e2-d30f-4a02-8252-3cb5cb6d1371 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.339046763Z" level=info msg="Starting container: d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6" id=43566eeb-3dc8-4778-bebc-e3a50bf31b5d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.341081965Z" level=info msg="Started container" PID=3395 containerID=d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6 description=kube-system/storage-provisioner/storage-provisioner id=43566eeb-3dc8-4778-bebc-e3a50bf31b5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=470b7f13281e4c61793ea7eeab1f00af8c464b75a182af8abe8a9e8fcfc00b9a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	d997c9b36079f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Running             storage-provisioner                    1                   470b7f13281e4       storage-provisioner                                     kube-system
	dd2d524ddac23       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           9 minutes ago       Running             proxy                                  0                   8667e85e98055       kubernetes-dashboard-kong-9849c64bd-jnmzq               kubernetes-dashboard
	35d02beeb2185       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             9 minutes ago       Exited              clear-stale-pid                        0                   8667e85e98055       kubernetes-dashboard-kong-9849c64bd-jnmzq               kubernetes-dashboard
	5fe7d916a364f       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   9 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   8df1f8a8e9b8c       kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj   kubernetes-dashboard
	efed0d8824978       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               9 minutes ago       Running             kubernetes-dashboard-web               0                   85c0932639a7f       kubernetes-dashboard-web-5c9f966b98-pmb5t               kubernetes-dashboard
	6e3eff743b9cd       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              9 minutes ago       Running             kubernetes-dashboard-auth              0                   226a7334560d4       kubernetes-dashboard-auth-76bb77b695-58swx              kubernetes-dashboard
	5c21853c28563       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               9 minutes ago       Running             kubernetes-dashboard-api               0                   442cfc6f80155       kubernetes-dashboard-api-6c4454678d-vmnj2               kubernetes-dashboard
	561ec43405227       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           9 minutes ago       Running             busybox                                1                   bdce9bd9d632c       busybox                                                 default
	2592b062e7872       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           9 minutes ago       Running             coredns                                0                   ad0fcb07810bf       coredns-66bc5c9577-dskxl                                kube-system
	dbbb6a255de37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           9 minutes ago       Exited              storage-provisioner                    0                   470b7f13281e4       storage-provisioner                                     kube-system
	d7b31f6039b4c       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           9 minutes ago       Running             kindnet-cni                            0                   42aa8ce5cba75       kindnet-zgcrn                                           kube-system
	cd178b86eed6d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           9 minutes ago       Running             kube-proxy                             0                   84cdb0361e2e6       kube-proxy-mr7c8                                        kube-system
	1340a2f59347d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           9 minutes ago       Running             etcd                                   0                   ccb6ae903ae17       etcd-default-k8s-diff-port-717222                       kube-system
	725faee3812c5       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           9 minutes ago       Running             kube-scheduler                         0                   2ad392cb5e514       kube-scheduler-default-k8s-diff-port-717222             kube-system
	d2c496c53c696       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           9 minutes ago       Running             kube-apiserver                         0                   ec833bb6abd84       kube-apiserver-default-k8s-diff-port-717222             kube-system
	0fb4e8910a64f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           9 minutes ago       Running             kube-controller-manager                0                   6217f80d4b77a       kube-controller-manager-default-k8s-diff-port-717222    kube-system
	
	
	==> coredns [2592b062e787245c17fcfad40e551290657aea425be5e044174243d7524bc317] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55129 - 16165 "HINFO IN 3453254911344364497.3052208195299777284. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04385742s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-717222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-717222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-717222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_05_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:05:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-717222
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:15:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:15:14 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:15:14 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:15:14 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:15:14 +0000   Fri, 19 Dec 2025 03:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-717222
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                301b16dc-31c1-4466-a363-b4e4f9941cd5
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-dskxl                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-717222                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-zgcrn                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-717222              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-717222     200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-mr7c8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-717222              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-api-6c4454678d-vmnj2                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m33s
	  kubernetes-dashboard        kubernetes-dashboard-auth-76bb77b695-58swx               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m33s
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-jnmzq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m33s
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-pmb5t                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m36s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
	  Normal  NodeReady                10m                    kubelet          Node default-k8s-diff-port-717222 status is now: NodeReady
	  Normal  Starting                 9m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m40s (x8 over 9m40s)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m40s (x8 over 9m40s)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m40s (x8 over 9m40s)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m34s                  node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78] <==
	{"level":"warn","ts":"2025-12-19T03:06:02.250297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.255026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.265621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.274876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.285161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.306338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.321181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.329974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.340725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.379475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.384564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.394467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:02.475486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.378732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.407834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.459907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.484810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.498580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.516121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.532033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.548224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.567442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.583249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.608694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.623918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:15:40 up 58 min,  0 user,  load average: 0.39, 0.95, 1.67
	Linux default-k8s-diff-port-717222 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d7b31f6039b4c71a1c774e7e89359f49dd4bca0b72f47cce0a7db10b8a4eb339] <==
	I1219 03:13:34.143898       1 main.go:301] handling current node
	I1219 03:13:44.143229       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:13:44.143277       1 main.go:301] handling current node
	I1219 03:13:54.143325       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:13:54.143362       1 main.go:301] handling current node
	I1219 03:14:04.150799       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:14:04.150841       1 main.go:301] handling current node
	I1219 03:14:14.143195       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:14:14.143237       1 main.go:301] handling current node
	I1219 03:14:24.144476       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:14:24.144523       1 main.go:301] handling current node
	I1219 03:14:34.143195       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:14:34.143226       1 main.go:301] handling current node
	I1219 03:14:44.143253       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:14:44.143286       1 main.go:301] handling current node
	I1219 03:14:54.143661       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:14:54.143693       1 main.go:301] handling current node
	I1219 03:15:04.151811       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:15:04.151848       1 main.go:301] handling current node
	I1219 03:15:14.143715       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:15:14.143773       1 main.go:301] handling current node
	I1219 03:15:24.143268       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:15:24.143299       1 main.go:301] handling current node
	I1219 03:15:34.150815       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:15:34.150858       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb] <==
	I1219 03:06:06.020926       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:06:06.068365       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:06:06.073897       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:06:06.084961       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.107.87.247"}
	I1219 03:06:06.089336       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.220.200"}
	I1219 03:06:06.096055       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.107.37.89"}
	I1219 03:06:06.097724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.126.95"}
	I1219 03:06:06.105426       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.103.60.201"}
	I1219 03:06:06.111150       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1219 03:06:06.366398       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.407675       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.460136       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.484666       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.498913       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.516026       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.532002       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.548159       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.564547       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:06:06.583215       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 03:06:06.599243       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	W1219 03:06:06.606221       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.623365       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:06:06.946827       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:06:07.061226       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992] <==
	I1219 03:06:06.443886       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:06:06.448122       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:06:06.448186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 03:06:06.448203       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 03:06:06.448213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 03:06:06.465415       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:06:06.465574       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:06:06.465610       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:06:06.465621       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:06:06.465629       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:06:06.469733       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 03:06:06.472102       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:06:06.475316       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:06:06.478047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:06:06.492013       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:06:06.492117       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:06:06.492629       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:06:06.493189       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:06:06.493873       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:06:07.594172       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:06:07.650019       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:06:07.681489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:06:07.691740       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:06:07.691828       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:06:07.691843       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [cd178b86eed6df4e301822d1cb033cde8457245acc5c1565f60ccb12d47ee2aa] <==
	I1219 03:06:03.629338       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:06:03.701880       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:06:03.802296       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:06:03.802339       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:06:03.802448       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:06:03.830859       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:06:03.830933       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:06:03.839110       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:06:03.840168       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:06:03.840214       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:06:03.842696       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:06:03.842727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:06:03.842694       1 config.go:309] "Starting node config controller"
	I1219 03:06:03.842762       1 config.go:200] "Starting service config controller"
	I1219 03:06:03.842769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:06:03.842768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:06:03.842972       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:06:03.843007       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:06:03.942900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:06:03.942899       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:06:03.942907       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:06:03.943205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a] <==
	I1219 03:06:01.472873       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:06:03.026871       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:06:03.026986       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1219 03:06:03.027002       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:06:03.027011       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:06:03.089314       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:06:03.089358       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:06:03.093055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:06:03.093084       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:06:03.093364       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:06:03.094336       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:06:03.193871       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067763     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/3331ddda-eb3e-4cee-bfd1-ec7b71a257e7-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-9849c64bd-jnmzq\" (UID: \"3331ddda-eb3e-4cee-bfd1-ec7b71a257e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067795     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wfw\" (UniqueName: \"kubernetes.io/projected/b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e-kube-api-access-f5wfw\") pod \"kubernetes-dashboard-api-6c4454678d-vmnj2\" (UID: \"b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067823     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/3331ddda-eb3e-4cee-bfd1-ec7b71a257e7-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-jnmzq\" (UID: \"3331ddda-eb3e-4cee-bfd1-ec7b71a257e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067847     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/3331ddda-eb3e-4cee-bfd1-ec7b71a257e7-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-jnmzq\" (UID: \"3331ddda-eb3e-4cee-bfd1-ec7b71a257e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067872     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhhh\" (UniqueName: \"kubernetes.io/projected/af7e569e-9279-40a6-aa17-cda231d867a2-kube-api-access-4lhhh\") pod \"kubernetes-dashboard-web-5c9f966b98-pmb5t\" (UID: \"af7e569e-9279-40a6-aa17-cda231d867a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067900     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmswx\" (UniqueName: \"kubernetes.io/projected/24aef03d-85db-4df3-a193-f13c807f84de-kube-api-access-bmswx\") pod \"kubernetes-dashboard-auth-76bb77b695-58swx\" (UID: \"24aef03d-85db-4df3-a193-f13c807f84de\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067924     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e-tmp-volume\") pod \"kubernetes-dashboard-api-6c4454678d-vmnj2\" (UID: \"b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067959     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/af7e569e-9279-40a6-aa17-cda231d867a2-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-pmb5t\" (UID: \"af7e569e-9279-40a6-aa17-cda231d867a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.068002     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/24aef03d-85db-4df3-a193-f13c807f84de-tmp-volume\") pod \"kubernetes-dashboard-auth-76bb77b695-58swx\" (UID: \"24aef03d-85db-4df3-a193-f13c807f84de\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.068024     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9f54900a-1ad0-4593-8236-0a1dc1a88e64-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj\" (UID: \"9f54900a-1ad0-4593-8236-0a1dc1a88e64\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.110436     727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:06:08 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:08.735645     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:08 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:08.735776     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:09 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:09.227142     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2" podStartSLOduration=0.849461056 podStartE2EDuration="2.227114712s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.357732164 +0000 UTC m=+7.304652030" lastFinishedPulling="2025-12-19 03:06:08.735385823 +0000 UTC m=+8.682305686" observedRunningTime="2025-12-19 03:06:09.226299035 +0000 UTC m=+9.173218910" watchObservedRunningTime="2025-12-19 03:06:09.227114712 +0000 UTC m=+9.174034588"
	Dec 19 03:06:10 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:10.419464     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:10 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:10.419559     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:11 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:11.234033     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx" podStartSLOduration=1.191233274 podStartE2EDuration="4.234006036s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.376415045 +0000 UTC m=+7.323334914" lastFinishedPulling="2025-12-19 03:06:10.419187817 +0000 UTC m=+10.366107676" observedRunningTime="2025-12-19 03:06:11.233777792 +0000 UTC m=+11.180697668" watchObservedRunningTime="2025-12-19 03:06:11.234006036 +0000 UTC m=+11.180925911"
	Dec 19 03:06:13 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:13.311379     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:13 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:13.311529     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.115193     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.115296     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.241972     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj" podStartSLOduration=0.508150908 podStartE2EDuration="7.241948013s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.38113198 +0000 UTC m=+7.328051833" lastFinishedPulling="2025-12-19 03:06:14.11492908 +0000 UTC m=+14.061848938" observedRunningTime="2025-12-19 03:06:14.24166888 +0000 UTC m=+14.188588771" watchObservedRunningTime="2025-12-19 03:06:14.241948013 +0000 UTC m=+14.188867888"
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.255081     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t" podStartSLOduration=1.322160186 podStartE2EDuration="7.255055586s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.378248795 +0000 UTC m=+7.325168663" lastFinishedPulling="2025-12-19 03:06:13.311144187 +0000 UTC m=+13.258064063" observedRunningTime="2025-12-19 03:06:14.254652221 +0000 UTC m=+14.201572121" watchObservedRunningTime="2025-12-19 03:06:14.255055586 +0000 UTC m=+14.201975462"
	Dec 19 03:06:19 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:19.265507     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq" podStartSLOduration=1.591075171 podStartE2EDuration="12.26547879s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.391768736 +0000 UTC m=+7.338688592" lastFinishedPulling="2025-12-19 03:06:18.066172352 +0000 UTC m=+18.013092211" observedRunningTime="2025-12-19 03:06:19.265420913 +0000 UTC m=+19.212340789" watchObservedRunningTime="2025-12-19 03:06:19.26547879 +0000 UTC m=+19.212398667"
	Dec 19 03:06:34 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:34.292974     727 scope.go:117] "RemoveContainer" containerID="dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d"
	
	
	==> kubernetes-dashboard [5c21853c28563a691ef440986410f18c67ba23dbc122b1d94b9cce6075bdfb75] <==
	I1219 03:06:08.860787       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:06:08.860900       1 init.go:49] Using in-cluster config
	I1219 03:06:08.861145       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:06:08.861164       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:06:08.861172       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:06:08.861177       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:06:08.868063       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:06:08.868091       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:06:08.944605       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:06:08.948604       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:06:38.953964       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [5fe7d916a364f331d8aa2665bfdbeab1fff27316fa0fee64cb7834c35bef418d] <==
	10.244.0.1 - - [19/Dec/2025:03:13:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:13:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:13:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:13:52 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:14:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:14:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:14:52 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:15:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:15:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	E1219 03:13:14.229983       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:14:14.230449       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:15:14.230020       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [6e3eff743b9cdb70ef6cbf70a1039d5cff4c8fe2e48d5a15acb23261f2b4507e] <==
	I1219 03:06:10.539923       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:06:10.540000       1 init.go:49] Using in-cluster config
	I1219 03:06:10.540134       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [efed0d882497800414676940b84aa41e026026efe618a2d160430de527d8e1f6] <==
	I1219 03:06:13.510889       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:06:13.510946       1 init.go:48] Using in-cluster config
	I1219 03:06:13.511172       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6] <==
	W1219 03:15:15.719585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:17.723541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:17.728039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:19.730873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:19.736264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:21.739389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:21.743179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:23.746873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:23.751433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:25.754335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:25.758662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:27.762821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:27.768381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:29.771335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:29.774952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:31.778153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:31.782100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:33.786083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:33.794414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:35.798846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:35.803201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:37.807099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:37.812318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:39.816440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:15:39.820568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d] <==
	I1219 03:06:03.592106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:06:33.595312       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.32s)
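Note on the capture above: the exited storage-provisioner instance (dbbb6a…) died because it could not reach the apiserver ClusterIP (dial tcp 10.96.0.1:443: i/o timeout), while the surviving instance only emits warnings about the deprecated v1 Endpoints API. The same pod check the harness runs at helpers_test.go:270, and a listing of the EndpointSlice objects that replace Endpoints, can be issued by hand with the context name taken from the logs (illustrative only, not part of the captured run):

	kubectl --context default-k8s-diff-port-717222 get pods -A --field-selector=status.phase!=Running
	kubectl --context default-k8s-diff-port-717222 get endpointslices -A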

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:14:48.778104    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:23:48.735234546 +0000 UTC m=+3538.454754002
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
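The wait at start_stop_delete_test.go:285 polls for pods carrying the k8s-app=kubernetes-dashboard label; an equivalent manual check against the same profile would be (illustrative only, not part of the captured run):

	kubectl --context old-k8s-version-433330 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard -o wide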
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-433330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-433330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (60.815896ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-433330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
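Because the dashboard-metrics-scraper deployment was never created, the image assertion at start_stop_delete_test.go:295 has no deployment info to match against " registry.k8s.io/echoserver:1.4". When the deployment does exist, its container images can be read with a jsonpath query such as (illustrative only):

	kubectl --context old-k8s-version-433330 get deploy dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[*].image}'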
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-433330
helpers_test.go:244: (dbg) docker inspect old-k8s-version-433330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	        "Created": "2025-12-19T03:03:42.290394762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 338430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:00.142567023Z",
	            "FinishedAt": "2025-12-19T03:04:59.042546116Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hosts",
	        "LogPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18-json.log",
	        "Name": "/old-k8s-version-433330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-433330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-433330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	                "LowerDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-433330",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-433330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-433330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dccc35fac12f6f9c606670826d973be968de80e11b47147853405d102ecda025",
	            "SandboxKey": "/var/run/docker/netns/dccc35fac12f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-433330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ebf807015d65c8db1230e3a313a61194a5685b902dee458d727805bc340fe33d",
	                    "EndpointID": "a6443b6616b36367152fe2b3630db96df1ad95a1774c32a4f279e3a106c8f1e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:3f:cd:fb:94:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-433330",
	                        "ed00f1899233"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
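The inspect output above is normally consumed through Go templates rather than read in full; for example, the host port bound to the API server port 8443/tcp can be extracted with the same --format idiom the harness uses later in these logs (illustrative only; with the mapping shown above this prints 33121):

	docker inspect old-k8s-version-433330 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'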
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25: (1.222724258s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-821749 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
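	Note: the "Verifying dashboard addon" step above simply polls until a pod carrying the app.kubernetes.io/name=kubernetes-dashboard-web label in the kubernetes-dashboard namespace leaves Pending. A rough way to reproduce that check by hand against this cluster (a sketch; the kubectl context name is taken from the profile name the log reports above):
	
		kubectl --context default-k8s-diff-port-717222 -n kubernetes-dashboard get pods -l app.kubernetes.io/name=kubernetes-dashboard-web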
	
	
	==> CRI-O <==
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.294882463Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2" id=b149fd9d-fd72-4e11-adb2-25e489e6bf82 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.296980103Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper" id=19f10fdd-5916-41c8-a26a-2ff94d29c5dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.297143775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.301522987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.302174856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.312945079Z" level=info msg="Created container 1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4: kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn/proxy" id=8f1aef5d-9910-4677-95e2-3ddd26dbad0a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.31363451Z" level=info msg="Starting container: 1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4" id=8be5d998-255e-4a4b-8020-90d42f9fb352 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.316425544Z" level=info msg="Started container" PID=1962 containerID=1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4 description=kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn/proxy id=8be5d998-255e-4a4b-8020-90d42f9fb352 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b4016d036c099501205c1263d738aec355ca9ba0985ac0de1a6326f1ba60f4f
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.32575797Z" level=info msg="Created container 9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper" id=19f10fdd-5916-41c8-a26a-2ff94d29c5dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.326784172Z" level=info msg="Starting container: 9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2" id=c58dd769-fa87-40be-a2b3-85b8605bc579 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.329520518Z" level=info msg="Started container" PID=1967 containerID=9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2 description=kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper id=c58dd769-fa87-40be-a2b3-85b8605bc579 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5b7f0901c4eba07cb72103c3ef6c2da1dd3e8c1ae0cbe501ab5646ede4e16ae
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.151028864Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cbd04026-4973-4fb2-a2f5-e1a0bcef1d04 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.152401329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5d524859-0cd0-482d-8890-c3a0b5bfcadf name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.153497878Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=476303b0-0df5-44b1-9467-56549e89f9fa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.153634163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.15821817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.158364577Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fc5e20521cad9e4c78b487769b301845071ca906b069eb3bccc1e9a717925eaf/merged/etc/passwd: no such file or directory"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.1583869Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fc5e20521cad9e4c78b487769b301845071ca906b069eb3bccc1e9a717925eaf/merged/etc/group: no such file or directory"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.158596016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.189477862Z" level=info msg="Created container b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622: kube-system/storage-provisioner/storage-provisioner" id=476303b0-0df5-44b1-9467-56549e89f9fa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.190263305Z" level=info msg="Starting container: b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622" id=094675cc-8230-4bbb-9611-af75aa9fbdb9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.192298533Z" level=info msg="Started container" PID=3386 containerID=b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622 description=kube-system/storage-provisioner/storage-provisioner id=094675cc-8230-4bbb-9611-af75aa9fbdb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0546164e8f444b2265480d306eeac5a7944c866d22f7a7daa5d4a8a97d59bd1
	Dec 19 03:10:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:10:06.979473429Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=88b602ee-9bb9-4765-ba4b-8f37a46dfeb9 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:15:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:15:06.983672919Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=3ec44638-8e03-4c18-8174-4cc031367aa5 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:20:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:20:06.987998685Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=691148af-39d8-427d-99b8-393bcb276786 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	b58c35740f2bd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   c0546164e8f44       storage-provisioner                                     kube-system
	9757437ad1c1d       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   b5b7f0901c4eb       kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2   kubernetes-dashboard
	1a79f7aa9ddca       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   5b4016d036c09       kubernetes-dashboard-kong-f487b85cd-7vrxn               kubernetes-dashboard
	43a7239d34381       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   5b4016d036c09       kubernetes-dashboard-kong-f487b85cd-7vrxn               kubernetes-dashboard
	c787e566a1357       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   1b21bd00ecbe5       kubernetes-dashboard-auth-96f55cbc9-q6w55               kubernetes-dashboard
	572a9a98a5b17       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   2598675df2023       kubernetes-dashboard-api-6c85dd6d79-gplb7               kubernetes-dashboard
	162ae6553f9ec       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   1832855b57889       kubernetes-dashboard-web-858bd7466-nt8k8                kubernetes-dashboard
	8040658b9f3ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                           18 minutes ago      Running             coredns                                0                   c68d596bc4c32       coredns-5dd5756b68-vp79f                                kube-system
	e0cd612dc1ee9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   a960ed231cfff       busybox                                                 default
	9243551aa2fc1       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   83c7dbba43d07       kindnet-hm2sz                                           kube-system
	9a529209e91c7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                                           18 minutes ago      Running             kube-proxy                             0                   2bfa6386c24f2       kube-proxy-wdrk8                                        kube-system
	4a2a86182d6e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   c0546164e8f44       storage-provisioner                                     kube-system
	ba54120ef227f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                           18 minutes ago      Running             etcd                                   0                   e4fbd268e41d9       etcd-old-k8s-version-433330                             kube-system
	dca7ec4a11ad9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                                           18 minutes ago      Running             kube-controller-manager                0                   2ebbf830bac83       kube-controller-manager-old-k8s-version-433330          kube-system
	6764bc2ee8b6d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                                           18 minutes ago      Running             kube-scheduler                         0                   b8ce7eb1e0991       kube-scheduler-old-k8s-version-433330                   kube-system
	e80d5d62bfdcc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                                           18 minutes ago      Running             kube-apiserver                         0                   5a193f007e64f       kube-apiserver-old-k8s-version-433330                   kube-system
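	Note: the container listing above reflects the CRI-O runtime state on the old-k8s-version-433330 node. A comparable view can typically be obtained directly on the node (a sketch, assuming crictl is available inside the minikube node as usual):
	
		minikube -p old-k8s-version-433330 ssh -- sudo crictl ps -a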
	
	
	==> coredns [8040658b9f3eabbbb5ba47f09aef99df774aaa0bfb4e998b1fb75e4a9d80669f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41940 - 34117 "HINFO IN 2692397503380385834.233192437307976356. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.044493269s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-433330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-433330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-433330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-433330
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:23:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-433330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                51a7519b-85cf-4ec7-8319-8a51b3632490
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-vp79f                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-433330                              100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-hm2sz                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-433330                    250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-433330           200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-wdrk8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-433330                    100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-6c85dd6d79-gplb7                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-96f55cbc9-q6w55                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-7vrxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-nt8k8                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-433330 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
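	Note: the node description above is the standard kubectl view of the control-plane node; roughly the same output can be regenerated with (a sketch; the context name is assumed to match the profile name used elsewhere in this report):
	
		kubectl --context old-k8s-version-433330 describe node old-k8s-version-433330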
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e] <==
	{"level":"info","ts":"2025-12-19T03:05:23.716798Z","caller":"traceutil/trace.go:171","msg":"trace[1286006389] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"111.325708ms","start":"2025-12-19T03:05:23.605446Z","end":"2025-12-19T03:05:23.716772Z","steps":["trace[1286006389] 'process raft request'  (duration: 111.154063ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.762567Z","caller":"traceutil/trace.go:171","msg":"trace[1170228424] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"156.711191ms","start":"2025-12-19T03:05:23.605773Z","end":"2025-12-19T03:05:23.762484Z","steps":["trace[1170228424] 'process raft request'  (duration: 156.477047ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.76258Z","caller":"traceutil/trace.go:171","msg":"trace[176958629] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"155.653437ms","start":"2025-12-19T03:05:23.606903Z","end":"2025-12-19T03:05:23.762556Z","steps":["trace[176958629] 'process raft request'  (duration: 155.495851ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.762606Z","caller":"traceutil/trace.go:171","msg":"trace[11901299] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"154.359966ms","start":"2025-12-19T03:05:23.608234Z","end":"2025-12-19T03:05:23.762594Z","steps":["trace[11901299] 'process raft request'  (duration: 154.193134ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.762855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.14879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.76292Z","caller":"traceutil/trace.go:171","msg":"trace[491680101] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:641; }","duration":"100.257204ms","start":"2025-12-19T03:05:23.662645Z","end":"2025-12-19T03:05:23.762902Z","steps":["trace[491680101] 'agreement among raft nodes before linearized reading'  (duration: 100.093535ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.763103Z","caller":"traceutil/trace.go:171","msg":"trace[1326394039] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"156.274686ms","start":"2025-12-19T03:05:23.606816Z","end":"2025-12-19T03:05:23.763091Z","steps":["trace[1326394039] 'process raft request'  (duration: 155.543051ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.926373Z","caller":"traceutil/trace.go:171","msg":"trace[923941046] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:668; }","duration":"163.896791ms","start":"2025-12-19T03:05:23.762458Z","end":"2025-12-19T03:05:23.926354Z","steps":["trace[923941046] 'read index received'  (duration: 90.331723ms)","trace[923941046] 'applied index is now lower than readState.Index'  (duration: 73.564544ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:23.926443Z","caller":"traceutil/trace.go:171","msg":"trace[1947040731] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"205.888361ms","start":"2025-12-19T03:05:23.720531Z","end":"2025-12-19T03:05:23.926419Z","steps":["trace[1947040731] 'process raft request'  (duration: 132.202751ms)","trace[1947040731] 'compare'  (duration: 73.474481ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:23.92647Z","caller":"traceutil/trace.go:171","msg":"trace[719632072] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"203.800384ms","start":"2025-12-19T03:05:23.722655Z","end":"2025-12-19T03:05:23.926455Z","steps":["trace[719632072] 'process raft request'  (duration: 203.652153ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.926492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.716096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kubernetes-dashboard/\" range_end:\"/registry/limitranges/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.926529Z","caller":"traceutil/trace.go:171","msg":"trace[291568890] range","detail":"{range_begin:/registry/limitranges/kubernetes-dashboard/; range_end:/registry/limitranges/kubernetes-dashboard0; response_count:0; response_revision:643; }","duration":"204.766821ms","start":"2025-12-19T03:05:23.721752Z","end":"2025-12-19T03:05:23.926519Z","steps":["trace[291568890] 'agreement among raft nodes before linearized reading'  (duration: 204.695193ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.950771Z","caller":"traceutil/trace.go:171","msg":"trace[910369966] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"179.377478ms","start":"2025-12-19T03:05:23.77138Z","end":"2025-12-19T03:05:23.950757Z","steps":["trace[910369966] 'process raft request'  (duration: 179.260563ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.950784Z","caller":"traceutil/trace.go:171","msg":"trace[4968190] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"179.447416ms","start":"2025-12-19T03:05:23.771287Z","end":"2025-12-19T03:05:23.950734Z","steps":["trace[4968190] 'process raft request'  (duration: 179.24612ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.951094Z","caller":"traceutil/trace.go:171","msg":"trace[108964002] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"179.505335ms","start":"2025-12-19T03:05:23.771577Z","end":"2025-12-19T03:05:23.951082Z","steps":["trace[108964002] 'process raft request'  (duration: 179.104746ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.951137Z","caller":"traceutil/trace.go:171","msg":"trace[652577346] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"176.993248ms","start":"2025-12-19T03:05:23.774131Z","end":"2025-12-19T03:05:23.951124Z","steps":["trace[652577346] 'process raft request'  (duration: 176.75032ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.951195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.30836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.951226Z","caller":"traceutil/trace.go:171","msg":"trace[1368537699] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:647; }","duration":"183.528611ms","start":"2025-12-19T03:05:23.767688Z","end":"2025-12-19T03:05:23.951216Z","steps":["trace[1368537699] 'agreement among raft nodes before linearized reading'  (duration: 183.469758ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:34.236332Z","caller":"traceutil/trace.go:171","msg":"trace[532828479] transaction","detail":"{read_only:false; response_revision:733; number_of_response:1; }","duration":"124.186623ms","start":"2025-12-19T03:05:34.112115Z","end":"2025-12-19T03:05:34.236302Z","steps":["trace[532828479] 'process raft request'  (duration: 124.016196ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:15:09.13417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":975}
	{"level":"info","ts":"2025-12-19T03:15:09.136009Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":975,"took":"1.560442ms","hash":2911588948}
	{"level":"info","ts":"2025-12-19T03:15:09.13606Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2911588948,"revision":975,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:09.140625Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1214}
	{"level":"info","ts":"2025-12-19T03:20:09.141731Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1214,"took":"808.598µs","hash":1219419124}
	{"level":"info","ts":"2025-12-19T03:20:09.141763Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1219419124,"revision":1214,"compact-revision":975}
	
	
	==> kernel <==
	 03:23:49 up  1:06,  0 user,  load average: 0.46, 0.52, 1.14
	Linux old-k8s-version-433330 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9243551aa2fc190ba88910a889833487b7b7643a56fe7be4cf6f3b57fe4c6fb8] <==
	I1219 03:21:41.944385       1 main.go:301] handling current node
	I1219 03:21:51.948891       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:21:51.948941       1 main.go:301] handling current node
	I1219 03:22:01.952938       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:01.952967       1 main.go:301] handling current node
	I1219 03:22:11.944344       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:11.944378       1 main.go:301] handling current node
	I1219 03:22:21.943891       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:21.943924       1 main.go:301] handling current node
	I1219 03:22:31.951661       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:31.951736       1 main.go:301] handling current node
	I1219 03:22:41.944791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:41.944829       1 main.go:301] handling current node
	I1219 03:22:51.946611       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:51.946640       1 main.go:301] handling current node
	I1219 03:23:01.952537       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:01.952570       1 main.go:301] handling current node
	I1219 03:23:11.946080       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:11.946118       1 main.go:301] handling current node
	I1219 03:23:21.947459       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:21.947498       1 main.go:301] handling current node
	I1219 03:23:31.952163       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:31.952196       1 main.go:301] handling current node
	I1219 03:23:41.944118       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:41.944176       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100] <==
	I1219 03:10:10.583004       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:10:10.583125       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583190       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583344       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583413       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583485       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583543       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583600       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:10:10.583658       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583735       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584188       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:15:10.584289       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584352       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584448       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584519       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584743       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584830       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584915       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584995       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:15:10.585051       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.585117       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:20:10.584921       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:20:10.585580       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:20:10.585664       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:20:10.585744       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	
	
	==> kube-controller-manager [dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386] <==
	I1219 03:05:29.137946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79" duration="10.500982ms"
	I1219 03:05:29.138772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79" duration="222.04µs"
	I1219 03:05:30.139560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9" duration="10.738008ms"
	I1219 03:05:30.141370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9" duration="230.761µs"
	I1219 03:05:35.145518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="124.735µs"
	I1219 03:05:36.153341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="7.771826ms"
	I1219 03:05:36.153487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="81.765µs"
	I1219 03:05:36.161499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="142.354µs"
	I1219 03:05:44.124877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.783071ms"
	I1219 03:05:44.124969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.031µs"
	I1219 03:05:44.322554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="10.292955ms"
	I1219 03:05:44.322813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="137.021µs"
	I1219 03:05:53.457987       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongplugins.configuration.konghq.com"
	I1219 03:05:53.458044       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 03:05:53.458064       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="tcpingresses.configuration.konghq.com"
	I1219 03:05:53.458080       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 03:05:53.458106       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongconsumers.configuration.konghq.com"
	I1219 03:05:53.458129       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongingresses.configuration.konghq.com"
	I1219 03:05:53.458159       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="udpingresses.configuration.konghq.com"
	I1219 03:05:53.458185       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongconsumergroups.configuration.konghq.com"
	I1219 03:05:53.458213       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongcustomentities.configuration.konghq.com"
	I1219 03:05:53.458314       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1219 03:05:53.658752       1 shared_informer.go:318] Caches are synced for resource quota
	I1219 03:05:53.873190       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1219 03:05:53.973659       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [9a529209e91c73dd4753533a0684e92d16e448ef5aed74f450f82867ec17304c] <==
	I1219 03:05:11.436432       1 server_others.go:69] "Using iptables proxy"
	I1219 03:05:11.452009       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1219 03:05:11.479225       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:11.482560       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:05:11.482604       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:05:11.482625       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:05:11.482679       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:05:11.483072       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:05:11.483108       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:11.485106       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:05:11.485126       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:05:11.485951       1 config.go:315] "Starting node config controller"
	I1219 03:05:11.486004       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:05:11.485951       1 config.go:188] "Starting service config controller"
	I1219 03:05:11.486179       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:05:11.585764       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:05:11.587020       1 shared_informer.go:318] Caches are synced for node config
	I1219 03:05:11.587059       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b] <==
	I1219 03:05:08.072216       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:05:10.585445       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:05:10.585508       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:05:10.585524       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:05:10.585535       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:05:10.628537       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:05:10.628629       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:10.631418       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:05:10.631571       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:10.633792       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:05:10.631594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:05:10.734781       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062051     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzc2\" (UniqueName: \"kubernetes.io/projected/970184f3-748e-4083-93e1-27215e7d3544-kube-api-access-hmzc2\") pod \"kubernetes-dashboard-api-6c85dd6d79-gplb7\" (UID: \"970184f3-748e-4083-93e1-27215e7d3544\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062114     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jp56\" (UniqueName: \"kubernetes.io/projected/c53e26af-d9fd-4efc-9354-3b3e505b50f1-kube-api-access-7jp56\") pod \"kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2\" (UID: \"c53e26af-d9fd-4efc-9354-3b3e505b50f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062154     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5d10317b-526d-41f3-8584-7612a5cbf9ef-tmp-volume\") pod \"kubernetes-dashboard-web-858bd7466-nt8k8\" (UID: \"5d10317b-526d-41f3-8584-7612a5cbf9ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-858bd7466-nt8k8"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062245     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28l2\" (UniqueName: \"kubernetes.io/projected/5d10317b-526d-41f3-8584-7612a5cbf9ef-kube-api-access-f28l2\") pod \"kubernetes-dashboard-web-858bd7466-nt8k8\" (UID: \"5d10317b-526d-41f3-8584-7612a5cbf9ef\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-858bd7466-nt8k8"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062320     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c53e26af-d9fd-4efc-9354-3b3e505b50f1-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2\" (UID: \"c53e26af-d9fd-4efc-9354-3b3e505b50f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062411     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwqf\" (UniqueName: \"kubernetes.io/projected/583637fe-b99f-4b55-8173-e40ef125a4da-kube-api-access-lrwqf\") pod \"kubernetes-dashboard-auth-96f55cbc9-q6w55\" (UID: \"583637fe-b99f-4b55-8173-e40ef125a4da\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062450     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062475     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062493     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/970184f3-748e-4083-93e1-27215e7d3544-tmp-volume\") pod \"kubernetes-dashboard-api-6c85dd6d79-gplb7\" (UID: \"970184f3-748e-4083-93e1-27215e7d3544\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062547     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/583637fe-b99f-4b55-8173-e40ef125a4da-tmp-volume\") pod \"kubernetes-dashboard-auth-96f55cbc9-q6w55\" (UID: \"583637fe-b99f-4b55-8173-e40ef125a4da\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062611     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:27 old-k8s-version-433330 kubelet[727]: I1219 03:05:27.257035     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:27 old-k8s-version-433330 kubelet[727]: I1219 03:05:27.257133     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.110504     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-858bd7466-nt8k8" podStartSLOduration=2.1424808889999998 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.288808133 +0000 UTC m=+17.406061880" lastFinishedPulling="2025-12-19 03:05:27.256749566 +0000 UTC m=+20.374003326" observedRunningTime="2025-12-19 03:05:28.109420313 +0000 UTC m=+21.226674073" watchObservedRunningTime="2025-12-19 03:05:28.110422335 +0000 UTC m=+21.227676096"
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.215638     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.215739     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:29 old-k8s-version-433330 kubelet[727]: I1219 03:05:29.086317     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:29 old-k8s-version-433330 kubelet[727]: I1219 03:05:29.086398     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:30 old-k8s-version-433330 kubelet[727]: I1219 03:05:30.129411     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7" podStartSLOduration=3.221513351 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.307578658 +0000 UTC m=+17.424832408" lastFinishedPulling="2025-12-19 03:05:28.215417358 +0000 UTC m=+21.332671100" observedRunningTime="2025-12-19 03:05:29.130889061 +0000 UTC m=+22.248142823" watchObservedRunningTime="2025-12-19 03:05:30.129352043 +0000 UTC m=+23.246605805"
	Dec 19 03:05:30 old-k8s-version-433330 kubelet[727]: I1219 03:05:30.130193     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55" podStartSLOduration=2.356310917 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.31224463 +0000 UTC m=+17.429498372" lastFinishedPulling="2025-12-19 03:05:29.086067921 +0000 UTC m=+22.203321673" observedRunningTime="2025-12-19 03:05:30.128668409 +0000 UTC m=+23.245922169" watchObservedRunningTime="2025-12-19 03:05:30.130134218 +0000 UTC m=+23.247387978"
	Dec 19 03:05:35 old-k8s-version-433330 kubelet[727]: I1219 03:05:35.294232     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:35 old-k8s-version-433330 kubelet[727]: I1219 03:05:35.294310     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:36 old-k8s-version-433330 kubelet[727]: I1219 03:05:36.145317     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2" podStartSLOduration=2.170852672 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.319586522 +0000 UTC m=+17.436840275" lastFinishedPulling="2025-12-19 03:05:35.293995871 +0000 UTC m=+28.411249625" observedRunningTime="2025-12-19 03:05:36.145033222 +0000 UTC m=+29.262286982" watchObservedRunningTime="2025-12-19 03:05:36.145262022 +0000 UTC m=+29.262515784"
	Dec 19 03:05:36 old-k8s-version-433330 kubelet[727]: I1219 03:05:36.161013     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn" podStartSLOduration=2.986982841 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.31920326 +0000 UTC m=+17.436457054" lastFinishedPulling="2025-12-19 03:05:34.493165004 +0000 UTC m=+27.610418746" observedRunningTime="2025-12-19 03:05:36.16087964 +0000 UTC m=+29.278133404" watchObservedRunningTime="2025-12-19 03:05:36.160944533 +0000 UTC m=+29.278198294"
	Dec 19 03:05:42 old-k8s-version-433330 kubelet[727]: I1219 03:05:42.150477     727 scope.go:117] "RemoveContainer" containerID="4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d"
	
	
	==> kubernetes-dashboard [162ae6553f9ecd77ec53fa67c114215b76b2baa657cb81e65ce4554f0273417d] <==
	I1219 03:05:27.332655       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:27.393367       1 init.go:48] Using in-cluster config
	I1219 03:05:27.393589       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [572a9a98a5b172d9fdd8fb35ee68c37988f906c342e70e89e9bc008440b9c471] <==
	I1219 03:05:28.320430       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:05:28.320512       1 init.go:49] Using in-cluster config
	I1219 03:05:28.320694       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:05:28.320747       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:28.320756       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:28.320762       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:28.327903       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:05:28.327931       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:28.332767       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:28.336184       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:05:58.341672       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2] <==
	10.244.0.1 - - [19/Dec/2025:03:21:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:54 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:54 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	E1219 03:21:35.368770       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:22:35.366075       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:35.366592       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [c787e566a13574b826852efdc66cad3575fec6831fdc3a03d85531ffb5a730c9] <==
	I1219 03:05:29.223480       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:05:29.223546       1 init.go:49] Using in-cluster config
	I1219 03:05:29.223660       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d] <==
	I1219 03:05:11.393839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:41.397217       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622] <==
	I1219 03:05:42.205301       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:05:42.214869       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:05:42.214917       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:05:59.616530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:05:59.616620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eca1d2cd-fec8-4561-9433-a93751f8f3f7", APIVersion:"v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3 became leader
	I1219 03:05:59.616726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3!
	I1219 03:05:59.716964       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-433330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.42s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:15:02.345459    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:23:52.546483219 +0000 UTC m=+3542.266002607
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-278042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-278042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (61.40041ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-278042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-278042
helpers_test.go:244: (dbg) docker inspect no-preload-278042:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	        "Created": "2025-12-19T03:03:43.244016686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 339111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:01.069592419Z",
	            "FinishedAt": "2025-12-19T03:05:00.08601805Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hostname",
	        "HostsPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hosts",
	        "LogPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35-json.log",
	        "Name": "/no-preload-278042",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-278042:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-278042",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	                "LowerDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-278042",
	                "Source": "/var/lib/docker/volumes/no-preload-278042/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-278042",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-278042",
	                "name.minikube.sigs.k8s.io": "no-preload-278042",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "86d771358686193a8ee27ccd7dd8113a32471ee83b7a9b27de2361ca35da19bf",
	            "SandboxKey": "/var/run/docker/netns/86d771358686",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-278042": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40e663ebb9c92fe8e9b5d1c06f073100d83df79efa76e295e52399b291babbbc",
	                    "EndpointID": "8aa1f1b0831c873e8bd4b8eb538f83b636c1962501683e75418947d1eb28c78e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7e:f0:a4:c4:bd:57",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-278042",
	                        "c49a965a7d8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-278042 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-278042 logs -n 25: (1.303386469s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
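The fix.go lines above inspect the profile container, find it Stopped, and fall through to a restart of the existing machine. A minimal sketch of that same state check via the Docker CLI follows; the helper name and the hard-coded profile name are illustrative, not minikube's actual fix.go code:

	// containerstate_sketch.go — query a container's state the same way the
	// "docker container inspect ... --format={{.State.Status}}" call above does.
	// The profile name is an example taken from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"--format", "{{.State.Status}}", name).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("default-k8s-diff-port-717222")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// A non-"running" state here is what triggers the restart path seen later in the log.
		fmt.Println("container state:", state)
	}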
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
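The 500 responses above come from the apiserver readiness loop in api_server.go: it polls /healthz roughly every 500ms and treats anything other than 200 as not-ready (here the rbac and priority-class post-start hooks have not finished yet). Below is a standalone sketch of an equivalent poll; the endpoint is copied from the log, while the timeout and the decision to skip TLS verification are illustrative assumptions rather than the harness's actual code:

	// healthz_sketch.go — minimal poll of a kube-apiserver /healthz endpoint,
	// loosely mirroring the loop logged above. Timeout values are examples.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The cluster serves a self-signed certificate; a real client would
			// load the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}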
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
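The configureAuth step above generates a server certificate signed by the machine CA with the SANs listed at provision.go:117, then copies it into /etc/docker. The sketch below issues a comparable server certificate from an existing CA; file paths, the serial number, and the assumption that the CA key is an RSA PKCS#1 key are all illustrative, not minikube's actual provisioning code:

	// servercert_sketch.go — issue a server certificate signed by an existing CA,
	// using the SANs and 26280h expiry shown in the log above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, err := os.ReadFile("ca.pem") // example path; the harness keeps these under .minikube/certs/
		if err != nil {
			panic(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			panic(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
		if err != nil {
			panic(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2), // example serial
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-717222"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"default-k8s-diff-port-717222", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}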
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
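Each "new ssh client" line above is sshutil.go dialing the container's published SSH port (33133 for this profile) as user "docker" with the per-machine private key, after which ssh_runner.go executes commands over that session. A sketch of the same connection using golang.org/x/crypto/ssh; the key path and the command run are examples, and host-key checking is skipped only because the target is a local throwaway container:

	// sshrun_sketch.go — dial a machine's forwarded SSH port and run one command,
	// roughly mirroring the sshutil.go / ssh_runner.go pattern in the log above.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-717222/id_rsa") // example path
		key, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33133", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}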
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
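The kapi.go lines above poll the kubernetes-dashboard namespace for a pod matching the label selector "app.kubernetes.io/name=kubernetes-dashboard-web" until it leaves Pending. A client-go sketch of an equivalent wait; the kubeconfig path and timeout are examples rather than the harness's actual values:

	// dashboard_wait_sketch.go — poll for a Running dashboard-web pod,
	// loosely mirroring the kapi.go wait loop logged above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // example path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("dashboard-web pod is Running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for dashboard-web pod")
	}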
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
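
Aside (not part of the captured log): the kubeadm.yaml.new file scp'd above contains the four YAML documents shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch of how one could sanity-check that file on the node, assuming the gopkg.in/yaml.v3 module is available and using the path from the log:

	// kubeadmcheck.go: decode the multi-document kubeadm YAML written above
	// and print each document's apiVersion and kind. Illustrative only.
	package main
	
	import (
		"fmt"
		"io"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		// Path written by minikube in the log above; adjust if it differs.
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("document: %v %v\n", doc["apiVersion"], doc["kind"])
		}
	}
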
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
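
Aside (not part of the captured log): the six openssl runs above check whether each control-plane certificate expires within 86400 seconds (24 hours). A rough standard-library equivalent of `openssl x509 -noout -checkend 86400`, as a sketch only, using the certificate paths from the log:

	// certcheck.go: report whether the listed certificates remain valid for
	// at least another 24 hours, mirroring the -checkend 86400 calls above.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		}
		cutoff := time.Now().Add(24 * time.Hour) // same window as -checkend 86400
		for _, path := range certs {
			data, err := os.ReadFile(path)
			if err != nil {
				log.Fatal(err)
			}
			block, _ := pem.Decode(data)
			if block == nil {
				log.Fatalf("%s: no PEM block found", path)
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s expires %s (valid for 24h: %v)\n",
				path, cert.NotAfter.Format(time.RFC3339), cert.NotAfter.After(cutoff))
		}
	}
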
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
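
Aside (not part of the captured log): the healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint on port 8444. A minimal sketch of the same check, assuming the address from the log and skipping certificate verification for brevity (a sketch, not production code):

	// healthz.go: query the apiserver health endpoint logged above and print
	// the status code and body ("ok" when healthy).
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Skipping verification only because this is an illustrative probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.94.2:8444/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
	}
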
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.736394898Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=717982ce-b0aa-47e4-97b9-7ccc9a3d471e name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.737528512Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=75c156d7-ee04-4c56-8f11-b0efea376553 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.737669801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742166616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742306458Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4ebf7307f813aac6a8d23d828e41e3d28400a9be6b5a1e96c9c7ac302e20c6f7/merged/etc/passwd: no such file or directory"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742328757Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4ebf7307f813aac6a8d23d828e41e3d28400a9be6b5a1e96c9c7ac302e20c6f7/merged/etc/group: no such file or directory"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742530495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.773812294Z" level=info msg="Created container 7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f: kube-system/storage-provisioner/storage-provisioner" id=75c156d7-ee04-4c56-8f11-b0efea376553 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.774507779Z" level=info msg="Starting container: 7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f" id=d7a3d3c0-89c8-443d-926a-bf8921e3011d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.776440067Z" level=info msg="Started container" PID=3331 containerID=7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f description=kube-system/storage-provisioner/storage-provisioner id=d7a3d3c0-89c8-443d-926a-bf8921e3011d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c464fbce01c73bc9002a59a55e969a9dcc96c829129ee9c487d0762b3a2a4169
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.362057944Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366564465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366589659Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366607882Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370444341Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370467276Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370484152Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374344046Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374374846Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374396298Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378400072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378429166Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378444369Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.382115308Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.382141451Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	7d6861325db2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   c464fbce01c73       storage-provisioner                                     kube-system
	5935e257f3a09       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   d0d6b23f0e1dc       kubernetes-dashboard-auth-bf9cfccb5-mrw8q               kubernetes-dashboard
	29fec7f14635a       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   0e0159aebbb3f       kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk   kubernetes-dashboard
	94493b4e71313       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   a29ecbc685444       kubernetes-dashboard-kong-78b7499b45-z266g              kubernetes-dashboard
	0c57b1705660a       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   a29ecbc685444       kubernetes-dashboard-kong-78b7499b45-z266g              kubernetes-dashboard
	bba0b0d89d520       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   8dedb4931ab92       kubernetes-dashboard-web-7f7574785f-h2jf5               kubernetes-dashboard
	d438e50bdc5cf       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   2d9da507d045f       kubernetes-dashboard-api-c7898775-zhmv8                 kubernetes-dashboard
	88f8999e01d5b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                           18 minutes ago      Running             coredns                                0                   192133b79d756       coredns-7d764666f9-vj7lm                                kube-system
	53f1be74e873d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   c464fbce01c73       storage-provisioner                                     kube-system
	bf4ed13bede99       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   1a93d07c85274       busybox                                                 default
	98dcabe770e7d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   c96cb5fa17a00       kindnet-xrp2s                                           kube-system
	757ccd2caa9cd       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                           18 minutes ago      Running             kube-proxy                             0                   4e59b01d6de99       kube-proxy-g2gm4                                        kube-system
	5f148a7e487d8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                           18 minutes ago      Running             etcd                                   0                   03f900ecc7129       etcd-no-preload-278042                                  kube-system
	001407ac1b909       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                           18 minutes ago      Running             kube-controller-manager                0                   d44cf856d1c8b       kube-controller-manager-no-preload-278042               kube-system
	973ccccab2576       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                           18 minutes ago      Running             kube-scheduler                         0                   3f68017fcfb0f       kube-scheduler-no-preload-278042                        kube-system
	821b9cbc72eb6       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                           18 minutes ago      Running             kube-apiserver                         0                   46991eb1a5abd       kube-apiserver-no-preload-278042                        kube-system
	
	
	==> coredns [88f8999e01d5bc23ebc968525542d039ae5c65ebd88f7ecad360345dc8277d94] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57319 - 34037 "HINFO IN 3016703752619529984.3565104935656887276. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019206295s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-278042
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-278042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-278042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-278042
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:23:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:23:43 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:23:43 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:23:43 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:23:43 +0000   Fri, 19 Dec 2025 03:04:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-278042
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                8fbc19b8-72f7-4938-83d9-fc3015dde7d1
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7d764666f9-vj7lm                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-278042                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-xrp2s                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-278042                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-278042                200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-g2gm4                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-278042                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-c7898775-zhmv8                  100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-bf9cfccb5-mrw8q                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-z266g               0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-h2jf5                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  19m   node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a] <==
	{"level":"info","ts":"2025-12-19T03:05:08.315130Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:05:08.988344Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988524Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:05:08.988542Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989244Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989319Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:05:08.989346Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989356Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.990632Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-278042 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:05:08.990634Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:05:08.990681Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:05:08.990883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:08.991615Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:08.992858Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:05:08.993684Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:05:09.001234Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:05:09.001416Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-19T03:15:09.026171Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":960}
	{"level":"info","ts":"2025-12-19T03:15:09.034559Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":960,"took":"7.955659ms","hash":4263527716,"current-db-size-bytes":3899392,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3899392,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:15:09.034609Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4263527716,"revision":960,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:09.031768Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1204}
	{"level":"info","ts":"2025-12-19T03:20:09.034352Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1204,"took":"2.163711ms","hash":2275355149,"current-db-size-bytes":3899392,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1998848,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:20:09.034391Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2275355149,"revision":1204,"compact-revision":960}
	
	
	==> kernel <==
	 03:23:53 up  1:06,  0 user,  load average: 0.42, 0.52, 1.13
	Linux no-preload-278042 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98dcabe770e7dcf718bfbc7938b663e3dd19fd9ad86c2bd261a4099febad9b1b] <==
	I1219 03:21:51.360619       1 main.go:301] handling current node
	I1219 03:22:01.369216       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:01.369247       1 main.go:301] handling current node
	I1219 03:22:11.367736       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:11.367767       1 main.go:301] handling current node
	I1219 03:22:21.362127       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:21.362158       1 main.go:301] handling current node
	I1219 03:22:31.365054       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:31.365106       1 main.go:301] handling current node
	I1219 03:22:41.367805       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:41.367840       1 main.go:301] handling current node
	I1219 03:22:51.360347       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:51.360384       1 main.go:301] handling current node
	I1219 03:23:01.367434       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:01.367473       1 main.go:301] handling current node
	I1219 03:23:11.368784       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:11.368827       1 main.go:301] handling current node
	I1219 03:23:21.360067       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:21.360103       1 main.go:301] handling current node
	I1219 03:23:31.360833       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:31.360886       1 main.go:301] handling current node
	I1219 03:23:41.366471       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:41.366500       1 main.go:301] handling current node
	I1219 03:23:51.360858       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:51.360894       1 main.go:301] handling current node
	
	
	==> kube-apiserver [821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec] <==
	W1219 03:05:13.385125       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.401923       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.413483       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.423560       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.434652       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.450356       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.470070       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.481151       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.492407       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.503960       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.519221       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.528090       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 03:05:13.711310       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:05:13.761392       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:13.862098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:13.961908       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:15.702973       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:05:15.771287       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:15.776040       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:15.788145       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.102.118.21"}
	I1219 03:05:15.795336       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.103.152.147"}
	I1219 03:05:15.798838       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.54.162"}
	I1219 03:05:15.807348       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.173.60"}
	I1219 03:05:15.813204       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.235.156"}
	I1219 03:15:10.324126       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae] <==
	I1219 03:05:13.463362       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463414       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463386       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1219 03:05:13.463438       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463465       1 range_allocator.go:177] "Sending events to api server"
	I1219 03:05:13.463505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1219 03:05:13.463516       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:13.463521       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463634       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463681       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463711       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464012       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464187       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464219       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464367       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464376       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464393       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464411       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.472055       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:14.564522       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.564546       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:05:14.564553       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.564553       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:05:14.572694       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.581900       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [757ccd2caa9cd35651079514b95b85a3612146f0d5b17fa735322d1e2ee036f1] <==
	I1219 03:05:11.015248       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:11.078140       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:11.178544       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:11.178579       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1219 03:05:11.178664       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:11.202324       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:11.202395       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:05:11.207676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:11.208164       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:05:11.208215       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:11.212272       1 config.go:200] "Starting service config controller"
	I1219 03:05:11.212297       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:11.212328       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:11.212333       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:11.212401       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:11.212410       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:11.212604       1 config.go:309] "Starting node config controller"
	I1219 03:05:11.212646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:11.212671       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:11.313219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:05:11.313270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:11.313557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2] <==
	I1219 03:05:08.762319       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:05:10.311124       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:05:10.311291       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:05:10.311314       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:05:10.311345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:05:10.339015       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:05:10.339346       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:10.343655       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:10.343694       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:10.345418       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:10.347040       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:10.447312       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:19:31 no-preload-278042 kubelet[713]: E1219 03:19:31.563481     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:19:45 no-preload-278042 kubelet[713]: E1219 03:19:45.563073     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:19:59 no-preload-278042 kubelet[713]: E1219 03:19:59.562772     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:20:05 no-preload-278042 kubelet[713]: E1219 03:20:05.563051     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:20:26 no-preload-278042 kubelet[713]: E1219 03:20:26.562657     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:20:30 no-preload-278042 kubelet[713]: E1219 03:20:30.562630     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:20:30 no-preload-278042 kubelet[713]: E1219 03:20:30.562787     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:20:48 no-preload-278042 kubelet[713]: E1219 03:20:48.562287     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:20:54 no-preload-278042 kubelet[713]: E1219 03:20:54.562796     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:21:06 no-preload-278042 kubelet[713]: E1219 03:21:06.562680     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:21:11 no-preload-278042 kubelet[713]: E1219 03:21:11.563417     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:21:28 no-preload-278042 kubelet[713]: E1219 03:21:28.562333     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:21:31 no-preload-278042 kubelet[713]: E1219 03:21:31.563340     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:21:53 no-preload-278042 kubelet[713]: E1219 03:21:53.563344     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:22:03 no-preload-278042 kubelet[713]: E1219 03:22:03.563479     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:22:18 no-preload-278042 kubelet[713]: E1219 03:22:18.562844     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:22:26 no-preload-278042 kubelet[713]: E1219 03:22:26.562406     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:22:37 no-preload-278042 kubelet[713]: E1219 03:22:37.563042     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:22:49 no-preload-278042 kubelet[713]: E1219 03:22:49.563063     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:22:54 no-preload-278042 kubelet[713]: E1219 03:22:54.563196     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:23:18 no-preload-278042 kubelet[713]: E1219 03:23:18.563266     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:23:26 no-preload-278042 kubelet[713]: E1219 03:23:26.562431     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:23:48 no-preload-278042 kubelet[713]: E1219 03:23:48.562683     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:23:49 no-preload-278042 kubelet[713]: E1219 03:23:49.562573     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:23:51 no-preload-278042 kubelet[713]: E1219 03:23:51.563334     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	
	
	==> kubernetes-dashboard [29fec7f14635a794f200efe276e62a0fc3151ea3d427cb21da297c53114fd8b9] <==
	10.244.0.1 - - [19/Dec/2025:03:21:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	E1219 03:21:25.195161       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:22:25.195098       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:25.194956       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [5935e257f3a0964bf239f408c9308c3b84961c75f06f32b2fda50133fe1ddbbd] <==
	I1219 03:05:26.300513       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:05:26.300578       1 init.go:49] Using in-cluster config
	I1219 03:05:26.300723       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [bba0b0d89d520cc6ca6a07611a31a2778cb1e41e66784ac255b63f970adcffb7] <==
	I1219 03:05:19.397607       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:19.397662       1 init.go:48] Using in-cluster config
	I1219 03:05:19.397903       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [d438e50bdc5cf86c6ad101cf6a3ca9c6c7091524bb7ffd95705de1d1a5ed8994] <==
	I1219 03:05:17.224225       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:05:17.224299       1 init.go:49] Using in-cluster config
	I1219 03:05:17.224498       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:17.224512       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:05:17.224518       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:17.224524       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:17.230241       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 03:05:17.230266       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:17.233542       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:17.236374       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:05:47.240946       1 manager.go:101] Successful request to sidecar
	
	
	==> storage-provisioner [53f1be74e873df0c32c600b228ba909dde859aa38c23f9a71f536c90aa4e096f] <==
	I1219 03:05:10.950483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:40.952323       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f] <==
	W1219 03:23:29.347304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:31.349915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:31.355071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:33.357945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:33.361807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:35.365216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:35.369382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:37.372268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:37.377522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:39.380227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:39.384143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:41.387062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:41.392658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:43.395737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:43.399497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:45.402464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:45.406320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:47.409103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:47.413050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:49.416534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:49.421078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:51.424048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:51.427837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:53.432223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:53.437004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-278042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.53s)
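The pod listing and node description above show the dashboard components present in the kubernetes-dashboard namespace on no-preload-278042 even though the test timed out waiting for its expected addon pods. A minimal manual check (a sketch only, assuming the profile is still running and its kubeconfig context carries the profile name) would be to list those pods with their labels and compare them against the k8s-app=kubernetes-dashboard selector that the embed-certs variant of this test waits on below:

	kubectl --context no-preload-278042 get pods -n kubernetes-dashboard --show-labels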

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185
E1219 03:24:34.614400    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:24:34.893053651 +0000 UTC m=+3584.612573039
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-805185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context embed-certs-805185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (58.241848ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-805185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
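To see which images the dashboard deployments actually carry, and whether any of them contains the registry.k8s.io/echoserver:1.4 string the assertion above expects, a one-line listing along these lines could be run against the same context (the jsonpath expression is an illustrative sketch, not part of the test harness):

	kubectl --context embed-certs-805185 get deploy -n kubernetes-dashboard -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'

Listing all deployments this way also avoids guessing a name: the describe call above failed because no deployment called dashboard-metrics-scraper exists in that namespace.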
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-805185
helpers_test.go:244: (dbg) docker inspect embed-certs-805185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	        "Created": "2025-12-19T03:04:41.634228453Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:45.883197161Z",
	            "FinishedAt": "2025-12-19T03:05:44.649106592Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hosts",
	        "LogPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415-json.log",
	        "Name": "/embed-certs-805185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-805185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-805185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	                "LowerDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-805185",
	                "Source": "/var/lib/docker/volumes/embed-certs-805185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-805185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-805185",
	                "name.minikube.sigs.k8s.io": "embed-certs-805185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7457f8142accad01c6ab136b22c6fa80ee06dd20e79f2a84f99ffb94723b6308",
	            "SandboxKey": "/var/run/docker/netns/7457f8142acc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-805185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67670b4143fc2c858529db8e9ece90091b3a7a00c5465943bbbbea83d055a550",
	                    "EndpointID": "a46e3becc7625d5ecd97a1cbfefeda9844ff31ce4ce29ae0c0d5c0cbe2af09be",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d6:26:96:9c:9e:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-805185",
	                        "c2b5f77a65ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
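The inspect dump above carries the full NetworkSettings.Ports map. When only one mapped port is needed, the same value can be read with a Go template instead of parsing the whole JSON; the template below mirrors the one the harness itself uses for 22/tcp later in this log, applied here to the API-server port (sketch only, container name taken from the output above):

    # print the host port Docker mapped to 8443/tcp for this node container
    docker container inspect embed-certs-805185 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # per the Ports block above, this should print 33131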
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25
E1219 03:24:35.895432    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:36.138768    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25: (1.185998344s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ stop    │ -p newest-cni-837172 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:01.036023  371990 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:01.036565  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.036582  371990 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:01.036589  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.037114  371990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:01.038234  371990 out.go:368] Setting JSON to false
	I1219 03:24:01.039510  371990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3992,"bootTime":1766110649,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:01.039592  371990 start.go:143] virtualization: kvm guest
	I1219 03:24:01.041656  371990 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:01.043211  371990 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:01.043253  371990 notify.go:221] Checking for updates...
	I1219 03:24:01.045604  371990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:01.046873  371990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:01.047985  371990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:01.052214  371990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:01.053413  371990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:01.055079  371990 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055198  371990 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055324  371990 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:01.055430  371990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:01.080518  371990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:01.080672  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.143010  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.132535066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.143105  371990 docker.go:319] overlay module found
	I1219 03:24:01.144954  371990 out.go:179] * Using the docker driver based on user configuration
	I1219 03:24:01.146278  371990 start.go:309] selected driver: docker
	I1219 03:24:01.146299  371990 start.go:928] validating driver "docker" against <nil>
	I1219 03:24:01.146315  371990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:01.147198  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.207023  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.196664778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.207180  371990 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1219 03:24:01.207207  371990 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1219 03:24:01.207525  371990 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:01.209632  371990 out.go:179] * Using Docker driver with root privileges
	I1219 03:24:01.210891  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:01.210974  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:01.210985  371990 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 03:24:01.211049  371990 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.212320  371990 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:01.213422  371990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:01.214779  371990 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:01.215953  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.216006  371990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:01.216025  371990 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:01.216047  371990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:01.216120  371990 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:01.216133  371990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:01.216218  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:01.216239  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json: {Name:mkf2bb7657c731e279d378a607e1a523b320a47e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:01.237349  371990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:01.237368  371990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:01.237386  371990 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:01.237420  371990 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:01.237512  371990 start.go:364] duration metric: took 75.602µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:01.237534  371990 start.go:93] Provisioning new machine with config: &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:01.237590  371990 start.go:125] createHost starting for "" (driver="docker")
	I1219 03:24:01.239751  371990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1219 03:24:01.239974  371990 start.go:159] libmachine.API.Create for "newest-cni-837172" (driver="docker")
	I1219 03:24:01.240017  371990 client.go:173] LocalClient.Create starting
	I1219 03:24:01.240087  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem
	I1219 03:24:01.240117  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240136  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240185  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem
	I1219 03:24:01.240204  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240213  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240512  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 03:24:01.257883  371990 cli_runner.go:211] docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 03:24:01.258008  371990 network_create.go:284] running [docker network inspect newest-cni-837172] to gather additional debugging logs...
	I1219 03:24:01.258034  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172
	W1219 03:24:01.275377  371990 cli_runner.go:211] docker network inspect newest-cni-837172 returned with exit code 1
	I1219 03:24:01.275412  371990 network_create.go:287] error running [docker network inspect newest-cni-837172]: docker network inspect newest-cni-837172: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-837172 not found
	I1219 03:24:01.275429  371990 network_create.go:289] output of [docker network inspect newest-cni-837172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-837172 not found
	
	** /stderr **
	I1219 03:24:01.275535  371990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:01.294388  371990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d70e62b79a31 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:cf:22:72:cb:a0} reservation:<nil>}
	I1219 03:24:01.295272  371990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-980aea652065 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ba:dd:9c:97:fb:7d} reservation:<nil>}
	I1219 03:24:01.296258  371990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42b42f6a5044 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:1e:31:1b:21:84} reservation:<nil>}
	I1219 03:24:01.297569  371990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec48c0}
	I1219 03:24:01.297599  371990 network_create.go:124] attempt to create docker network newest-cni-837172 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1219 03:24:01.297651  371990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-837172 newest-cni-837172
	I1219 03:24:01.350655  371990 network_create.go:108] docker network newest-cni-837172 192.168.76.0/24 created
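The three "skipping subnet ... that is taken" lines show the free-subnet scan that precedes the network create. Reproducing that check by hand is a one-liner against the Docker CLI; this is an illustrative sketch, not part of the recorded run:

    # list every Docker network together with its IPAM subnet
    docker network ls -q | xargs docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'

Any 192.168.x.0/24 already printed here is skipped, which is why this start lands on 192.168.76.0/24.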
	I1219 03:24:01.350682  371990 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-837172" container
	I1219 03:24:01.350794  371990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 03:24:01.370331  371990 cli_runner.go:164] Run: docker volume create newest-cni-837172 --label name.minikube.sigs.k8s.io=newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true
	I1219 03:24:01.391519  371990 oci.go:103] Successfully created a docker volume newest-cni-837172
	I1219 03:24:01.391624  371990 cli_runner.go:164] Run: docker run --rm --name newest-cni-837172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --entrypoint /usr/bin/test -v newest-cni-837172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1219 03:24:01.840345  371990 oci.go:107] Successfully prepared a docker volume newest-cni-837172
	I1219 03:24:01.840449  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.840465  371990 kic.go:194] Starting extracting preloaded images to volume ...
	I1219 03:24:01.840529  371990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1219 03:24:05.697885  371990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.857303195s)
	I1219 03:24:05.697924  371990 kic.go:203] duration metric: took 3.857455339s to extract preloaded images to volume ...
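The preload step unpacks the cached lz4 tarball into the newest-cni-837172 volume through a throwaway tar container. To see what actually landed in the volume, a similar one-off container can list it; the image reference is copied from this log, while the ls entrypoint and the /var/lib/containers path are assumptions about the kicbase layout:

    # peek into the named volume with a disposable container (illustrative)
    docker run --rm --entrypoint /usr/bin/ls \
      -v newest-cni-837172:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186 \
      /var/lib/containers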
	W1219 03:24:05.698024  371990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1219 03:24:05.698058  371990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1219 03:24:05.698100  371990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 03:24:05.757547  371990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-837172 --name newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-837172 --network newest-cni-837172 --ip 192.168.76.2 --volume newest-cni-837172:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1219 03:24:06.051568  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Running}}
	I1219 03:24:06.072261  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.093313  371990 cli_runner.go:164] Run: docker exec newest-cni-837172 stat /var/lib/dpkg/alternatives/iptables
	I1219 03:24:06.144238  371990 oci.go:144] the created container "newest-cni-837172" has a running status.
	I1219 03:24:06.144278  371990 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa...
	I1219 03:24:06.230796  371990 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 03:24:06.256299  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.273734  371990 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1219 03:24:06.273758  371990 kic_runner.go:114] Args: [docker exec --privileged newest-cni-837172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1219 03:24:06.341522  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.363532  371990 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:06.363655  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:06.390168  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:06.390536  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:06.390552  371990 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:06.391620  371990 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34054->127.0.0.1:33138: read: connection reset by peer
	I1219 03:24:09.536680  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.536733  371990 ubuntu.go:182] provisioning hostname "newest-cni-837172"
	I1219 03:24:09.536797  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.555045  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.555325  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.555340  371990 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-837172 && echo "newest-cni-837172" | sudo tee /etc/hostname
	I1219 03:24:09.709116  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.709183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.727847  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.728289  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.728322  371990 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-837172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-837172/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-837172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:09.871486  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:09.871529  371990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:24:09.871588  371990 ubuntu.go:190] setting up certificates
	I1219 03:24:09.871600  371990 provision.go:84] configureAuth start
	I1219 03:24:09.871666  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:09.890551  371990 provision.go:143] copyHostCerts
	I1219 03:24:09.890608  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:24:09.890616  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:24:09.890710  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:24:09.890819  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:24:09.890829  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:24:09.890867  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:24:09.890920  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:24:09.890933  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:24:09.890959  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:24:09.891015  371990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-837172 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-837172]
	I1219 03:24:09.923962  371990 provision.go:177] copyRemoteCerts
	I1219 03:24:09.924021  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:09.924055  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.943177  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.046012  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:10.066001  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:10.083456  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:24:10.101464  371990 provision.go:87] duration metric: took 229.847544ms to configureAuth
	I1219 03:24:10.101492  371990 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:10.101673  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:10.101801  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.120532  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:10.120821  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:10.120839  371990 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:24:10.410477  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:24:10.410502  371990 machine.go:97] duration metric: took 4.046944113s to provisionDockerMachine
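The SSH step just above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts crio inside the node. Because systemd runs as PID 1 in the kic container, the result can be checked from the host with docker exec; a small sketch using the container name from this log:

    # confirm the CRI-O drop-in and that the service came back up
    docker exec newest-cni-837172 cat /etc/sysconfig/crio.minikube
    docker exec newest-cni-837172 systemctl is-active crio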
	I1219 03:24:10.410513  371990 client.go:176] duration metric: took 9.170488353s to LocalClient.Create
	I1219 03:24:10.410535  371990 start.go:167] duration metric: took 9.170561433s to libmachine.API.Create "newest-cni-837172"
	I1219 03:24:10.410546  371990 start.go:293] postStartSetup for "newest-cni-837172" (driver="docker")
	I1219 03:24:10.410559  371990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:10.410613  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:10.410664  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.430222  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.533641  371990 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:10.537745  371990 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:10.537783  371990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:10.537806  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:24:10.537857  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:24:10.537934  371990 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:24:10.538030  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:10.545818  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:10.566832  371990 start.go:296] duration metric: took 156.272185ms for postStartSetup
	I1219 03:24:10.567244  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.586641  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:10.586934  371990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:10.586987  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.604894  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.703924  371990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:10.708480  371990 start.go:128] duration metric: took 9.470874061s to createHost
	I1219 03:24:10.708519  371990 start.go:83] releasing machines lock for "newest-cni-837172", held for 9.47099552s
	I1219 03:24:10.708596  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.727823  371990 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:10.727853  371990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:10.727877  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.727922  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.748155  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.748577  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.899556  371990 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:10.906157  371990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:24:10.942010  371990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:10.946776  371990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:10.946834  371990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:10.972921  371990 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:24:10.972943  371990 start.go:496] detecting cgroup driver to use...
	I1219 03:24:10.972971  371990 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:10.973032  371990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:24:10.989146  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:24:11.002203  371990 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:11.002282  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:11.018422  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:11.035554  371990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:11.119919  371990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:11.207179  371990 docker.go:234] disabling docker service ...
	I1219 03:24:11.207252  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:11.225572  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:11.237859  371990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:11.323024  371990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:11.407303  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
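The stop/disable/mask sequence above makes sure neither containerd, docker nor cri-docker competes with CRI-O for the CRI socket. A quick way to confirm which runtimes are left running in the node container (sketch only; once provisioning finishes, only crio should report active):

    docker exec newest-cni-837172 systemctl is-active crio containerd docker cri-docker.service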
	I1219 03:24:11.419524  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:11.433341  371990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:24:11.433395  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.443408  371990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:24:11.443468  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.452460  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.460889  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.469451  371990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:11.477277  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.485766  371990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.499106  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.508174  371990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:11.515313  371990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:11.522319  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:11.604796  371990 ssh_runner.go:195] Run: sudo systemctl restart crio
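The sed edits above pin the pause image and the systemd cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. Verifying the resulting drop-in is a single grep against the same file; illustrative only:

    docker exec newest-cni-837172 grep -E 'pause_image|cgroup_manager|conmon_cgroup' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected: pause_image = "registry.k8s.io/pause:3.10.1" and cgroup_manager = "systemd"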
	I1219 03:24:11.746317  371990 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:24:11.746376  371990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:24:11.750220  371990 start.go:564] Will wait 60s for crictl version
	I1219 03:24:11.750278  371990 ssh_runner.go:195] Run: which crictl
	I1219 03:24:11.753821  371990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:11.777608  371990 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:24:11.777714  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.804073  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.833640  371990 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:24:11.834886  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:11.852567  371990 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:11.856667  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:11.871316  371990 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:11.872497  371990 kubeadm.go:884] updating cluster {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:11.872642  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:11.872692  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.904183  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.904204  371990 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:24:11.904263  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.930999  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.931020  371990 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:11.931026  371990 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:24:11.931148  371990 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-837172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
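
The ExecStart override above is installed as a systemd drop-in (the 10-kubeadm.conf file scp'd a few lines below). Assuming that layout, the effective kubelet flags can be confirmed on the node with:

    sudo systemctl daemon-reload
    systemctl cat kubelet                           # shows the drop-in and its ExecStart
    systemctl show kubelet -p ExecStart --no-pager  # the flags actually in effect
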
	I1219 03:24:11.931228  371990 ssh_runner.go:195] Run: crio config
	I1219 03:24:11.976472  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:11.976491  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:11.976503  371990 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:11.976531  371990 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-837172 NodeName:newest-cni-837172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:11.976658  371990 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-837172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:24:11.976739  371990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:11.985021  371990 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:11.985080  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:11.992859  371990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:24:12.006496  371990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:12.021643  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
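
The rendered kubeadm config lands on the node as /var/tmp/minikube/kubeadm.yaml.new and is copied to kubeadm.yaml just before init (see below). As a sketch, the same file can be exercised without changing node state by using kubeadm's dry-run mode with the pinned binary:

    # parse the generated config and print what would be done, creating nothing
    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
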
	I1219 03:24:12.034441  371990 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:12.038092  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:12.047986  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:12.128789  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:12.152988  371990 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172 for IP: 192.168.76.2
	I1219 03:24:12.153016  371990 certs.go:195] generating shared ca certs ...
	I1219 03:24:12.153035  371990 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.153175  371990 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:24:12.153220  371990 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:24:12.153233  371990 certs.go:257] generating profile certs ...
	I1219 03:24:12.153289  371990 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key
	I1219 03:24:12.153302  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt with IP's: []
	I1219 03:24:12.271406  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt ...
	I1219 03:24:12.271435  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt: {Name:mke8fed86df635a05f54420e92870363146991f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271601  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key ...
	I1219 03:24:12.271612  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key: {Name:mk39737e3f76352137132fe8060ef391a0d43bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271690  371990 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b
	I1219 03:24:12.271717  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1219 03:24:12.379475  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b ...
	I1219 03:24:12.379503  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b: {Name:mkc4d74c8f8c4deb077c8f688d203329a2c5750d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379662  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b ...
	I1219 03:24:12.379675  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b: {Name:mk1b93ad6f4ca843c3104dc76975062dde81eaef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379761  371990 certs.go:382] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt
	I1219 03:24:12.379853  371990 certs.go:386] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key
	I1219 03:24:12.379918  371990 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key
	I1219 03:24:12.379940  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt with IP's: []
	I1219 03:24:12.467338  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt ...
	I1219 03:24:12.467368  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt: {Name:mk5dc8f653da407b5f14ca799301800eac0952c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467561  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key ...
	I1219 03:24:12.467581  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key: {Name:mk4063cc1af4dbf73c9c390b468c828c35385b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467821  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:24:12.467864  371990 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:12.467875  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:24:12.467901  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:12.467925  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:12.467953  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:12.468001  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
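
Once generated, the profile certificates can be inspected to confirm the SANs match the IPs listed above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2); a sketch using the paths from this run:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
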
	I1219 03:24:12.468519  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:12.487159  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:12.504306  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:12.521550  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:24:12.538418  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:12.554861  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:12.572166  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:12.589324  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:24:12.606224  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:12.625269  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:24:12.642642  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:24:12.658965  371990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:12.671458  371990 ssh_runner.go:195] Run: openssl version
	I1219 03:24:12.677537  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.684496  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:12.691660  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695495  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695541  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.730806  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:24:12.738920  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:24:12.746295  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.753462  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:24:12.760758  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764356  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764415  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.800484  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:12.809192  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8536.pem /etc/ssl/certs/51391683.0
	I1219 03:24:12.816759  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.825274  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:24:12.833125  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836939  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836993  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.871891  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:12.879672  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85362.pem /etc/ssl/certs/3ec20f2e.0
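
The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) are named after each certificate's OpenSSL subject hash, which is what the openssl x509 -hash calls compute. The general pattern, as a sketch for the minikubeCA case:

    # derive the hash-named symlink for a CA, as done above for minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
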
	I1219 03:24:12.887040  371990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:12.890648  371990 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 03:24:12.890729  371990 kubeadm.go:401] StartCluster: {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:12.890825  371990 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:12.890893  371990 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:12.920058  371990 cri.go:92] found id: ""
	I1219 03:24:12.920133  371990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:12.928606  371990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:24:12.936934  371990 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1219 03:24:12.936985  371990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:24:12.945218  371990 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:24:12.945240  371990 kubeadm.go:158] found existing configuration files:
	
	I1219 03:24:12.945287  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 03:24:12.952614  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:24:12.952666  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:24:12.960262  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 03:24:12.967725  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:24:12.967831  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:24:12.975015  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.982506  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:24:12.982549  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.989686  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 03:24:12.997834  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:24:12.997888  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
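
The stale-config checks above reduce to one rule: any of the four kubeconfigs that does not point at control-plane.minikube.internal:8443 is removed before kubeadm runs. A condensed sketch of the same loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
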
	I1219 03:24:13.005263  371990 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1219 03:24:13.041610  371990 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1219 03:24:13.041730  371990 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 03:24:13.106822  371990 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 03:24:13.106921  371990 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 03:24:13.106982  371990 kubeadm.go:319] OS: Linux
	I1219 03:24:13.107046  371990 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 03:24:13.107146  371990 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 03:24:13.107237  371990 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 03:24:13.107288  371990 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 03:24:13.107344  371990 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 03:24:13.107385  371990 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 03:24:13.107463  371990 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 03:24:13.107538  371990 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 03:24:13.164958  371990 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 03:24:13.165152  371990 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 03:24:13.165292  371990 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 03:24:13.174971  371990 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 03:24:13.178028  371990 out.go:252]   - Generating certificates and keys ...
	I1219 03:24:13.178136  371990 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 03:24:13.178232  371990 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 03:24:13.301903  371990 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 03:24:13.387971  371990 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 03:24:13.500057  371990 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 03:24:13.603458  371990 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 03:24:13.636925  371990 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 03:24:13.637122  371990 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:13.836231  371990 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 03:24:13.836371  371990 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:14.002346  371990 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 03:24:14.032095  371990 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 03:24:14.137234  371990 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 03:24:14.137362  371990 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 03:24:14.167788  371990 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 03:24:14.256296  371990 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 03:24:14.335846  371990 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 03:24:14.409462  371990 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 03:24:14.592839  371990 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 03:24:14.593412  371990 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 03:24:14.597164  371990 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 03:24:14.598823  371990 out.go:252]   - Booting up control plane ...
	I1219 03:24:14.598951  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 03:24:14.599066  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 03:24:14.599695  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 03:24:14.613628  371990 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 03:24:14.613794  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 03:24:14.621414  371990 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 03:24:14.621682  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 03:24:14.621767  371990 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 03:24:14.720948  371990 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 03:24:14.721103  371990 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 03:24:15.222675  371990 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.8355ms
	I1219 03:24:15.227351  371990 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 03:24:15.227489  371990 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1219 03:24:15.227609  371990 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 03:24:15.227757  371990 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 03:24:16.232434  371990 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004794877s
	I1219 03:24:16.822339  371990 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.594795775s
	I1219 03:24:18.729241  371990 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501609989s
	I1219 03:24:18.747830  371990 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 03:24:18.757789  371990 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 03:24:18.768843  371990 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 03:24:18.769101  371990 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-837172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 03:24:18.777248  371990 kubeadm.go:319] [bootstrap-token] Using token: tjh3gu.t27j0f9f7y1maup8
	I1219 03:24:18.778596  371990 out.go:252]   - Configuring RBAC rules ...
	I1219 03:24:18.778756  371990 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 03:24:18.782127  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 03:24:18.788723  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 03:24:18.791752  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 03:24:18.794369  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 03:24:18.796980  371990 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 03:24:19.135416  371990 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 03:24:19.551422  371990 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 03:24:20.135668  371990 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 03:24:20.136573  371990 kubeadm.go:319] 
	I1219 03:24:20.136667  371990 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 03:24:20.136677  371990 kubeadm.go:319] 
	I1219 03:24:20.136815  371990 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 03:24:20.136852  371990 kubeadm.go:319] 
	I1219 03:24:20.136883  371990 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 03:24:20.136970  371990 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 03:24:20.137020  371990 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 03:24:20.137026  371990 kubeadm.go:319] 
	I1219 03:24:20.137089  371990 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 03:24:20.137101  371990 kubeadm.go:319] 
	I1219 03:24:20.137171  371990 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 03:24:20.137179  371990 kubeadm.go:319] 
	I1219 03:24:20.137247  371990 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 03:24:20.137362  371990 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 03:24:20.137462  371990 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 03:24:20.137475  371990 kubeadm.go:319] 
	I1219 03:24:20.137594  371990 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 03:24:20.137725  371990 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 03:24:20.137741  371990 kubeadm.go:319] 
	I1219 03:24:20.137841  371990 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.137977  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 03:24:20.138014  371990 kubeadm.go:319] 	--control-plane 
	I1219 03:24:20.138022  371990 kubeadm.go:319] 
	I1219 03:24:20.138116  371990 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 03:24:20.138124  371990 kubeadm.go:319] 
	I1219 03:24:20.138229  371990 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.138367  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
	I1219 03:24:20.141307  371990 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1219 03:24:20.141417  371990 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1219 03:24:20.141469  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:20.141490  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:20.143537  371990 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 03:24:20.144502  371990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:24:20.148822  371990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1219 03:24:20.148843  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:24:20.161612  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
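
The CNI step is an ordinary kubectl apply of the kindnet manifest using the node's local kubeconfig. To verify it landed (a sketch; the daemonset name kindnet is assumed from the manifest):

    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet
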
	I1219 03:24:20.379173  371990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:24:20.379262  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.379275  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-837172 minikube.k8s.io/updated_at=2025_12_19T03_24_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=newest-cni-837172 minikube.k8s.io/primary=true
	I1219 03:24:20.388746  371990 ops.go:34] apiserver oom_adj: -16
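
The oom_adj probe reads the kernel's OOM adjustment for the kube-apiserver process; -16 means the apiserver is strongly protected from the OOM killer. Reproducing it by hand (a sketch mirroring the command above):

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy knob, -16 here
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # the modern equivalent file
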
	I1219 03:24:20.454762  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.955824  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.454834  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.954831  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.455563  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.955820  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.454808  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.955426  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.454807  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.521140  371990 kubeadm.go:1114] duration metric: took 4.141930442s to wait for elevateKubeSystemPrivileges
	I1219 03:24:24.521185  371990 kubeadm.go:403] duration metric: took 11.630460792s to StartCluster
	I1219 03:24:24.521209  371990 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.521280  371990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:24.522690  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.522969  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:24:24.522985  371990 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:24.523053  371990 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:24.523152  371990 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-837172"
	I1219 03:24:24.523166  371990 addons.go:70] Setting default-storageclass=true in profile "newest-cni-837172"
	I1219 03:24:24.523191  371990 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-837172"
	I1219 03:24:24.523195  371990 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-837172"
	I1219 03:24:24.523231  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.523251  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:24.523588  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.523773  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.524387  371990 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:24.525579  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:24.547572  371990 addons.go:239] Setting addon default-storageclass=true in "newest-cni-837172"
	I1219 03:24:24.547634  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.547832  371990 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:24.548129  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.552104  371990 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.552127  371990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:24.552183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.578893  371990 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.579252  371990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:24.579323  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.583084  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.603726  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.615978  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:24:24.668369  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:24.704139  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.719590  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.803320  371990 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
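
The host record is injected by rewriting the coredns ConfigMap (the sed pipeline a few lines above) so the Corefile gains a hosts block. It can be checked afterwards with (a sketch; the kubeconfig context name is assumed to match the profile):

    kubectl --context newest-cni-837172 -n kube-system get configmap coredns -o yaml \
      | grep -A3 'hosts {'
    #    hosts {
    #       192.168.76.1 host.minikube.internal
    #       fallthrough
    #    }
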
	I1219 03:24:24.805437  371990 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:24.805497  371990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:25.029229  371990 api_server.go:72] duration metric: took 506.215716ms to wait for apiserver process to appear ...
	I1219 03:24:25.029261  371990 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:25.029282  371990 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:25.034829  371990 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:24:25.035777  371990 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:25.035813  371990 api_server.go:131] duration metric: took 6.544499ms to wait for apiserver health ...
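
The healthz and version probes above go straight at the apiserver endpoint; by hand, skipping certificate verification for brevity (a sketch, relying on the default anonymous access to these paths):

    curl -k https://192.168.76.2:8443/healthz   # -> ok
    curl -k https://192.168.76.2:8443/version   # reports v1.35.0-rc.1
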
	I1219 03:24:25.035828  371990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:25.038607  371990 system_pods.go:59] 8 kube-system pods found
	I1219 03:24:25.038639  371990 system_pods.go:61] "coredns-7d764666f9-ckc9j" [5bc3e758-2623-4eae-87fe-a58b932c9e87] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038651  371990 system_pods.go:61] "etcd-newest-cni-837172" [59f28fae-3605-487b-a1b8-c3851c47abac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:25.038659  371990 system_pods.go:61] "kindnet-846n4" [b45c7fbd-085c-4972-b312-0973aab68ddc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:25.038670  371990 system_pods.go:61] "kube-apiserver-newest-cni-837172" [8d92900e-716d-42ad-9d88-1ca6d0ddf5c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:25.038678  371990 system_pods.go:61] "kube-controller-manager-newest-cni-837172" [46b3ad5a-64d1-4e1f-8bdf-ce613dcd6348] Running
	I1219 03:24:25.038684  371990 system_pods.go:61] "kube-proxy-6wg2n" [356cd689-df37-49ac-a3f2-1931978ccf64] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:24:25.038690  371990 system_pods.go:61] "kube-scheduler-newest-cni-837172" [da065d09-cc65-42e7-8e0d-9f9709cafaf9] Running
	I1219 03:24:25.038695  371990 system_pods.go:61] "storage-provisioner" [ba402c27-5828-489f-a656-bc0ef2e8f05e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038713  371990 system_pods.go:74] duration metric: took 2.880877ms to wait for pod list to return data ...
	I1219 03:24:25.038720  371990 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:25.038969  371990 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:24:25.040226  371990 addons.go:546] duration metric: took 517.179033ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:24:25.040990  371990 default_sa.go:45] found service account: "default"
	I1219 03:24:25.041006  371990 default_sa.go:55] duration metric: took 2.27792ms for default service account to be created ...
	I1219 03:24:25.041015  371990 kubeadm.go:587] duration metric: took 518.007856ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:25.041030  371990 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:25.043438  371990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:25.043465  371990 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:25.043494  371990 node_conditions.go:105] duration metric: took 2.45952ms to run NodePressure ...
	I1219 03:24:25.043503  371990 start.go:242] waiting for startup goroutines ...
	I1219 03:24:25.308179  371990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-837172" context rescaled to 1 replicas
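
The rescale logged above pins CoreDNS to a single replica for this one-node cluster; the equivalent manual command (context name assumed from the profile) is:

    kubectl --context newest-cni-837172 -n kube-system scale deployment coredns --replicas=1
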
	I1219 03:24:25.308227  371990 start.go:247] waiting for cluster config update ...
	I1219 03:24:25.308241  371990 start.go:256] writing updated cluster config ...
	I1219 03:24:25.308502  371990 ssh_runner.go:195] Run: rm -f paused
	I1219 03:24:25.358553  371990 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:24:25.360429  371990 out.go:179] * Done! kubectl is now configured to use "newest-cni-837172" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.472463868Z" level=info msg="Created container d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/clear-stale-pid" id=36313b84-f615-418e-a0c2-1800c7ad9bba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.473232027Z" level=info msg="Starting container: d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885" id=fa0d9c25-58bb-41e7-a751-32c0a5ee2072 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.475578796Z" level=info msg="Started container" PID=1981 containerID=d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/clear-stale-pid id=fa0d9c25-58bb-41e7-a751-32c0a5ee2072 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4243852a152c440419680bef0dfbf6f37d15c21f97f0c7059823f1520f9fc99c
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.135352218Z" level=info msg="Checking image status: kong:3.9" id=b06c69a2-5538-434a-8a72-4f2b223b8bfe name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.135542093Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.137747838Z" level=info msg="Checking image status: kong:3.9" id=9a4a1d08-b9e8-4169-83f7-aec209f5e0b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.13786748Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.142013294Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy" id=de23cef6-56d6-4c7e-a45c-c26d931492ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.142148287Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.148827695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.149609559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.189335726Z" level=info msg="Created container 20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy" id=de23cef6-56d6-4c7e-a45c-c26d931492ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.190165238Z" level=info msg="Starting container: 20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2" id=b6a1bf8a-ce95-4b10-adea-c7af131524c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.192808924Z" level=info msg="Started container" PID=1991 containerID=20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy id=b6a1bf8a-ce95-4b10-adea-c7af131524c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4243852a152c440419680bef0dfbf6f37d15c21f97f0c7059823f1520f9fc99c
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.183170694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=084cd7a4-6ece-4c0a-8397-94465f3314df name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.184121665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4d531b84-18eb-47e0-aad8-61f09bca340d name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.185241228Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=156d4e77-f8f4-4dbd-a7c4-e7b60cc39d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.18538707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.189952355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190095237Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/77220e3b053f54d12f604ca801e81ba41d5ddbdc6900f8b55f5f4338438a1241/merged/etc/passwd: no such file or directory"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190117712Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/77220e3b053f54d12f604ca801e81ba41d5ddbdc6900f8b55f5f4338438a1241/merged/etc/group: no such file or directory"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190333672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.231341429Z" level=info msg="Created container 3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904: kube-system/storage-provisioner/storage-provisioner" id=156d4e77-f8f4-4dbd-a7c4-e7b60cc39d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.232031749Z" level=info msg="Starting container: 3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904" id=26b83eef-8d30-46ec-89bd-a94e4e05ed3b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.234124046Z" level=info msg="Started container" PID=3409 containerID=3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904 description=kube-system/storage-provisioner/storage-provisioner id=26b83eef-8d30-46ec-89bd-a94e4e05ed3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c1876caf93065afdf67bc083a0b6fc921040c35760414f728f15ba554180160
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	3d7dd245b233f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   0c1876caf9306       storage-provisioner                                     kube-system
	20beadfa950bf       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   4243852a152c4       kubernetes-dashboard-kong-9849c64bd-9p6zf               kubernetes-dashboard
	d14c5a7b642f8       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   4243852a152c4       kubernetes-dashboard-kong-9849c64bd-9p6zf               kubernetes-dashboard
	a0449cd056863       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   db4923db488cf       kubernetes-dashboard-auth-658884f98f-455ns              kubernetes-dashboard
	95cc887c80866       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   4037dc076fb10       kubernetes-dashboard-web-5c9f966b98-gfhnn               kubernetes-dashboard
	310b39bacccab       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   0be0ce9f85847       kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr   kubernetes-dashboard
	5b4f781150596       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   5af5195e34c00       kubernetes-dashboard-api-78bc857d5c-fljnp               kubernetes-dashboard
	37fd60f84cab5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           18 minutes ago      Running             coredns                                0                   f0f30eba64edf       coredns-66bc5c9577-8gphx                                kube-system
	e8ff222bdb55d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   523d107bc5d8f       busybox                                                 default
	3e6a9f16432bb       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           18 minutes ago      Running             kube-proxy                             0                   4fb4de09d3b1c       kube-proxy-p8pqg                                        kube-system
	3df3cb7877110       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   0c1876caf9306       storage-provisioner                                     kube-system
	9734264bc0316       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   e566763b65b28       kindnet-jj9ms                                           kube-system
	dca8f84f406b7       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           18 minutes ago      Running             kube-controller-manager                0                   1479078fc9c08       kube-controller-manager-embed-certs-805185              kube-system
	c0e9c22a25238       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           18 minutes ago      Running             kube-scheduler                         0                   49e7ef6075ae3       kube-scheduler-embed-certs-805185                       kube-system
	e4f794af7924e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           18 minutes ago      Running             etcd                                   0                   c8ef977665655       etcd-embed-certs-805185                                 kube-system
	fa9a88171fdc7       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           18 minutes ago      Running             kube-apiserver                         0                   d92a0248993ee       kube-apiserver-embed-certs-805185                       kube-system
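	The table above is the container runtime's listing for the embed-certs-805185 node as captured by the test harness. A minimal sketch of collecting the same view by hand (assuming the minikube profile name embed-certs-805185 and that crictl is available inside the node, as is typical for minikube's cri-o node image):
	
	  # list all CRI containers on the node, including exited ones
	  minikube -p embed-certs-805185 ssh "sudo crictl ps -a"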
	
	
	==> coredns [37fd60f84cab5a40d06b06eda266df17eadd8d0a9ee56f7b235782087ec0083a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40097 - 29931 "HINFO IN 2735309851509519627.415811791505313667. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.415024708s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-805185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-805185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-805185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-805185
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:24:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:05:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-805185
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e529c61b-35ad-4151-ab38-525026482d8c
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-8gphx                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-embed-certs-805185                                  100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-jj9ms                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-embed-certs-805185                        250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-805185               200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-p8pqg                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-805185                        100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-78bc857d5c-fljnp                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-658884f98f-455ns               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-9p6zf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-gfhnn                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
	  Normal  NodeReady                19m                kubelet          Node embed-certs-805185 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
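	The node summary above reflects the API server's view of embed-certs-805185 at capture time. A minimal sketch of regenerating it against a live cluster (assuming kubectl is pointed at the kubeconfig context that minikube creates for this profile, which by default matches the profile name):
	
	  kubectl --context embed-certs-805185 describe node embed-certs-805185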
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [e4f794af7924e48700f3eb1f53c1070c15bc99d17539d5f097c1a7c62dded81f] <==
	{"level":"warn","ts":"2025-12-19T03:05:53.719221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.745613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.755575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.779584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.825911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.666523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.686420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.703183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.714636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.724682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.735837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.746037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.755589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.784157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.802436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.825473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:06:04.808381Z","caller":"traceutil/trace.go:172","msg":"trace[24513416] transaction","detail":"{read_only:false; response_revision:699; number_of_response:1; }","duration":"118.600036ms","start":"2025-12-19T03:06:04.689759Z","end":"2025-12-19T03:06:04.808359Z","steps":["trace[24513416] 'process raft request'  (duration: 118.551956ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:06:04.808596Z","caller":"traceutil/trace.go:172","msg":"trace[1604688651] transaction","detail":"{read_only:false; response_revision:698; number_of_response:1; }","duration":"178.640288ms","start":"2025-12-19T03:06:04.629933Z","end":"2025-12-19T03:06:04.808573Z","steps":["trace[1604688651] 'process raft request'  (duration: 128.977486ms)","trace[1604688651] 'compare'  (duration: 49.259539ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:06:10.029004Z","caller":"traceutil/trace.go:172","msg":"trace[1715983664] transaction","detail":"{read_only:false; response_revision:712; number_of_response:1; }","duration":"117.29944ms","start":"2025-12-19T03:06:09.911684Z","end":"2025-12-19T03:06:10.028983Z","steps":["trace[1715983664] 'process raft request'  (duration: 95.039156ms)","trace[1715983664] 'compare'  (duration: 21.881704ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:15:53.166470Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2025-12-19T03:15:53.173813Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":959,"took":"6.970165ms","hash":136659999,"current-db-size-bytes":3895296,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3895296,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:15:53.173870Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":136659999,"revision":959,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:53.171463Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1202}
	{"level":"info","ts":"2025-12-19T03:20:53.173821Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1202,"took":"1.992974ms","hash":2951296099,"current-db-size-bytes":3895296,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2015232,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:20:53.173858Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2951296099,"revision":1202,"compact-revision":959}
	
	
	==> kernel <==
	 03:24:36 up  1:07,  0 user,  load average: 1.93, 0.90, 1.23
	Linux embed-certs-805185 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9734264bc03165e973381a11181db3d0d85532eb608a1d648d545affcc0f5657] <==
	I1219 03:22:35.868429       1 main.go:301] handling current node
	I1219 03:22:45.867952       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:22:45.867995       1 main.go:301] handling current node
	I1219 03:22:55.871868       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:22:55.871903       1 main.go:301] handling current node
	I1219 03:23:05.872806       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:05.872843       1 main.go:301] handling current node
	I1219 03:23:15.868177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:15.868210       1 main.go:301] handling current node
	I1219 03:23:25.867534       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:25.867573       1 main.go:301] handling current node
	I1219 03:23:35.867892       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:35.867944       1 main.go:301] handling current node
	I1219 03:23:45.874749       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:45.874784       1 main.go:301] handling current node
	I1219 03:23:55.871842       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:55.871874       1 main.go:301] handling current node
	I1219 03:24:05.867919       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:05.867959       1 main.go:301] handling current node
	I1219 03:24:15.868601       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:15.868645       1 main.go:301] handling current node
	I1219 03:24:25.868249       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:25.868398       1 main.go:301] handling current node
	I1219 03:24:35.867612       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:35.867672       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fa9a88171fdc75e01df96259a9096dab5e5ab76217553f36b6a9922f9e0f06fe] <==
	W1219 03:05:57.666179       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:05:57.686342       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.703087       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.714554       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.724651       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:05:57.735825       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.745925       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.755549       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.773268       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.784117       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:05:57.795282       1 controller.go:667] quota admission added evaluator for: endpoints
	W1219 03:05:57.802417       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.819295       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:05:57.894304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:57.991073       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:58.143944       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:58.544436       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:05:58.579983       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:58.584890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:58.595427       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.101.245.250"}
	I1219 03:05:58.600356       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.48.46"}
	I1219 03:05:58.604096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.96.197.102"}
	I1219 03:05:58.610018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.99.175"}
	I1219 03:05:58.616775       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.250.73"}
	I1219 03:15:54.401313       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [dca8f84f406b7acd8227404694ece4fd29d232591939f26e4325c52e7c00de60] <==
	I1219 03:05:57.736964       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:05:57.737011       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 03:05:57.737131       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 03:05:57.737248       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:05:57.737588       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:05:57.737617       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 03:05:57.738742       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:05:57.738773       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:05:57.738742       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:05:57.744005       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:05:57.744039       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:05:57.744147       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:05:57.744203       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:05:57.744212       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:05:57.744220       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:05:57.746255       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:05:57.747424       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:05:57.753898       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:05:57.755198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:05:58.841753       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:58.868581       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:58.874821       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:58.881981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:58.882003       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:05:58.882012       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3e6a9f16432bb2d0f57c9e657b776eaae753f9a9bc474bcd825b022f2cf4726b] <==
	I1219 03:05:55.448309       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:55.528222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:05:55.628850       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:05:55.628898       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:05:55.629015       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:55.649512       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:55.649574       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:05:55.655220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:55.655665       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:05:55.655695       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:55.657141       1 config.go:200] "Starting service config controller"
	I1219 03:05:55.657618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:55.657697       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:55.657751       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:55.658014       1 config.go:309] "Starting node config controller"
	I1219 03:05:55.658027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:55.658041       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:55.658491       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:55.658532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:55.757856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:55.759651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:05:55.759720       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0e9c22a2523807e95fb727795c040c95c5bd029feb66a6a92f7087e4503774e] <==
	I1219 03:05:53.750115       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:05:54.696153       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:05:54.696180       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:54.700571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.700567       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:05:54.700623       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.700627       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:05:54.700603       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.700660       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.701061       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:54.701240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:54.801652       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.801670       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.801652       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.784900     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060-tmp-volume\") pod \"kubernetes-dashboard-auth-658884f98f-455ns\" (UID: \"c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.784992     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/30a45022-1901-4ea6-8857-08ff9a85c27a-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-9p6zf\" (UID: \"30a45022-1901-4ea6-8857-08ff9a85c27a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785031     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47j2v\" (UniqueName: \"kubernetes.io/projected/f73d26a9-48d2-47fc-a241-1a7504297988-kube-api-access-47j2v\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr\" (UID: \"f73d26a9-48d2-47fc-a241-1a7504297988\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785063     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2c9c9b86-fd2a-4420-b98d-27dd078fe2c6-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-gfhnn\" (UID: \"2c9c9b86-fd2a-4420-b98d-27dd078fe2c6\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785080     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474hq\" (UniqueName: \"kubernetes.io/projected/c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060-kube-api-access-474hq\") pod \"kubernetes-dashboard-auth-658884f98f-455ns\" (UID: \"c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785095     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab309a53-9e4b-4a01-899a-797c7ba5208d-tmp-volume\") pod \"kubernetes-dashboard-api-78bc857d5c-fljnp\" (UID: \"ab309a53-9e4b-4a01-899a-797c7ba5208d\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785116     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzfm\" (UniqueName: \"kubernetes.io/projected/ab309a53-9e4b-4a01-899a-797c7ba5208d-kube-api-access-6zzfm\") pod \"kubernetes-dashboard-api-78bc857d5c-fljnp\" (UID: \"ab309a53-9e4b-4a01-899a-797c7ba5208d\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785138     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f73d26a9-48d2-47fc-a241-1a7504297988-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr\" (UID: \"f73d26a9-48d2-47fc-a241-1a7504297988\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785164     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7smc\" (UniqueName: \"kubernetes.io/projected/2c9c9b86-fd2a-4420-b98d-27dd078fe2c6-kube-api-access-k7smc\") pod \"kubernetes-dashboard-web-5c9f966b98-gfhnn\" (UID: \"2c9c9b86-fd2a-4420-b98d-27dd078fe2c6\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785222     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/30a45022-1901-4ea6-8857-08ff9a85c27a-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-9p6zf\" (UID: \"30a45022-1901-4ea6-8857-08ff9a85c27a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf"
	Dec 19 03:05:59 embed-certs-805185 kubelet[737]: I1219 03:05:59.997824     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:59 embed-certs-805185 kubelet[737]: I1219 03:05:59.997922     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.037195     737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.097959     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp" podStartSLOduration=1.09098601 podStartE2EDuration="2.097935412s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:58.990618466 +0000 UTC m=+7.051227125" lastFinishedPulling="2025-12-19 03:05:59.997567856 +0000 UTC m=+8.058176527" observedRunningTime="2025-12-19 03:06:00.097689886 +0000 UTC m=+8.158298580" watchObservedRunningTime="2025-12-19 03:06:00.097935412 +0000 UTC m=+8.158544082"
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.934970     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.936003     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:02 embed-certs-805185 kubelet[737]: I1219 03:06:02.793612     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr" podStartSLOduration=2.864491069 podStartE2EDuration="4.793587364s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.005628182 +0000 UTC m=+7.066236856" lastFinishedPulling="2025-12-19 03:06:00.934724484 +0000 UTC m=+8.995333151" observedRunningTime="2025-12-19 03:06:01.111916375 +0000 UTC m=+9.172525051" watchObservedRunningTime="2025-12-19 03:06:02.793587364 +0000 UTC m=+10.854196040"
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.028076     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.028167     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.121599     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn" podStartSLOduration=1.100576683 podStartE2EDuration="6.121572519s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.006841332 +0000 UTC m=+7.067449988" lastFinishedPulling="2025-12-19 03:06:04.027837166 +0000 UTC m=+12.088445824" observedRunningTime="2025-12-19 03:06:04.121201067 +0000 UTC m=+12.181809743" watchObservedRunningTime="2025-12-19 03:06:04.121572519 +0000 UTC m=+12.182181195"
	Dec 19 03:06:05 embed-certs-805185 kubelet[737]: I1219 03:06:05.244202     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:05 embed-certs-805185 kubelet[737]: I1219 03:06:05.244300     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:06 embed-certs-805185 kubelet[737]: I1219 03:06:06.135487     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns" podStartSLOduration=1.904186191 podStartE2EDuration="8.135456486s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.012692427 +0000 UTC m=+7.073301081" lastFinishedPulling="2025-12-19 03:06:05.243962705 +0000 UTC m=+13.304571376" observedRunningTime="2025-12-19 03:06:06.134881051 +0000 UTC m=+14.195489728" watchObservedRunningTime="2025-12-19 03:06:06.135456486 +0000 UTC m=+14.196065161"
	Dec 19 03:06:12 embed-certs-805185 kubelet[737]: I1219 03:06:12.162006     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf" podStartSLOduration=2.749011678 podStartE2EDuration="14.161975971s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.023057738 +0000 UTC m=+7.083666406" lastFinishedPulling="2025-12-19 03:06:10.436022033 +0000 UTC m=+18.496630699" observedRunningTime="2025-12-19 03:06:12.161201474 +0000 UTC m=+20.221810169" watchObservedRunningTime="2025-12-19 03:06:12.161975971 +0000 UTC m=+20.222584647"
	Dec 19 03:06:26 embed-certs-805185 kubelet[737]: I1219 03:06:26.182763     737 scope.go:117] "RemoveContainer" containerID="3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2"
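	The kubelet entries above are read from the node's systemd journal. A minimal sketch of tailing them directly on the node (assuming the profile name embed-certs-805185 and that kubelet runs as a systemd unit on the minikube node, as these journal lines suggest):
	
	  # show the last 50 kubelet journal entries without paging
	  minikube -p embed-certs-805185 ssh "sudo journalctl -u kubelet --no-pager -n 50"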
	
	
	==> kubernetes-dashboard [310b39bacccabe01a7800d05d30675f93096703212a17f66095da8c1865d22d2] <==
	10.244.0.1 - - [19/Dec/2025:03:21:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	E1219 03:22:01.082390       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:01.082525       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:24:01.082114       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [5b4f7811505964d9e14b039acff4c61a760a6112e63bfff6242995499ee3b049] <==
	I1219 03:06:00.157650       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:06:00.157768       1 init.go:49] Using in-cluster config
	I1219 03:06:00.158043       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:06:00.158057       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:06:00.158064       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:06:00.158072       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:06:00.164066       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:06:00.164098       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:06:00.190400       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:06:00.190937       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:06:30.196244       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [95cc887c80866d0ea33ef79f7654625e51e2590ee08a32fae89a8d46347f529a] <==
	I1219 03:06:04.155476       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:06:04.155552       1 init.go:48] Using in-cluster config
	I1219 03:06:04.155767       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [a0449cd05686367a0a816405c686858df4a264fbcacf43407705baff34ccbc5a] <==
	I1219 03:06:05.338222       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:06:05.338287       1 init.go:49] Using in-cluster config
	I1219 03:06:05.338471       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904] <==
	W1219 03:24:11.868861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:13.872131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:13.876128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:15.880002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:15.884269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:17.887577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:17.891794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:19.895638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:19.899375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:21.903213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:21.907243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.910143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.914640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.918600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.924444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.928290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.932914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.935848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.941274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.944766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.948619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.952001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.956116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.959480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.963533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2] <==
	I1219 03:05:55.403581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:06:25.407035       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-805185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.37s)
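For context, the embed-certs post-mortem above ends with two storage-provisioner instances: the earlier one exits fatally because it cannot reach the in-cluster apiserver service ("dial tcp 10.96.0.1:443: i/o timeout"), while its replacement only logs Endpoints deprecation warnings. Below is a minimal Go sketch of the same reachability probe; the address and 32s timeout are taken from the log line, everything else (package layout, output) is illustrative only and not part of the test suite. It would have to run from inside the cluster network (for example, a debug pod), since 10.96.0.1 is a ClusterIP.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver ClusterIP that the failing storage-provisioner
	// instance timed out against (address and timeout taken from the log above).
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 32*time.Second)
	if err != nil {
		fmt.Println("apiserver service unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver service reachable")
}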

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:15:52.635534    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:16:00.492208    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:16:38.531180    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:17:38.270088    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:17:42.700282    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:17:48.485878    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:18:59.336716    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:19:09.207853    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:19:48.777074    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:20:02.345030    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:20:52.635212    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:21:00.492492    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:21:11.827920    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:21:21.583602    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:21:38.530996    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:22:15.683353    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:22:23.539803    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:22:38.269061    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/enable-default-cni-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:22:42.699994    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:22:48.485845    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:24:41.906020934 +0000 UTC m=+3591.625540321
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-717222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (63.952043ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-717222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
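For reference, the failed check above has two parts: it first waits up to 9m0s for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, then describes the dashboard-metrics-scraper deployment expecting its image to contain registry.k8s.io/echoserver:1.4. The snippet below is a minimal client-go sketch of the first part only, assuming a local kubeconfig whose current context points at the default-k8s-diff-port-717222 cluster; it is an illustration, not the test's actual implementation.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (assumed; the test
	// itself uses the profile's kubeconfig under the minikube home directory).
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Mirror the "waiting 9m0s for pods matching k8s-app=kubernetes-dashboard"
	// step from the log: poll until a matching pod is Running or time runs out.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod running:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for dashboard pods:", ctx.Err())
			return
		case <-time.After(10 * time.Second):
		}
	}
}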
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-717222
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-717222:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	        "Created": "2025-12-19T03:04:47.206515223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:53.385310779Z",
	            "FinishedAt": "2025-12-19T03:05:52.262245388Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hosts",
	        "LogPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59-json.log",
	        "Name": "/default-k8s-diff-port-717222",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-717222:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-717222",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	                "LowerDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-717222",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-717222/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-717222",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9d06f7aea24e94d05365ef4f03fb5f64c6b5272dae79bd49619bd1821269410e",
	            "SandboxKey": "/var/run/docker/netns/9d06f7aea24e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-717222": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61bece957d17b845e006f35e9e337693d4d396daf2e4f93e70692be3f3288cbb",
	                    "EndpointID": "2c278581ff3b356f6bebafb94e691fc066cab71fa7bdd973be671471a23efca1",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ae:9c:c1:61:6a:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-717222",
	                        "f8284300a033"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25: (1.296507947s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ stop    │ -p newest-cni-837172 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ image   │ embed-certs-805185 image list --format=json                                                                                                                                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p embed-certs-805185 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:01.036023  371990 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:01.036565  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.036582  371990 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:01.036589  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.037114  371990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:01.038234  371990 out.go:368] Setting JSON to false
	I1219 03:24:01.039510  371990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3992,"bootTime":1766110649,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:01.039592  371990 start.go:143] virtualization: kvm guest
	I1219 03:24:01.041656  371990 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:01.043211  371990 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:01.043253  371990 notify.go:221] Checking for updates...
	I1219 03:24:01.045604  371990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:01.046873  371990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:01.047985  371990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:01.052214  371990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:01.053413  371990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:01.055079  371990 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055198  371990 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055324  371990 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:01.055430  371990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:01.080518  371990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:01.080672  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.143010  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.132535066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.143105  371990 docker.go:319] overlay module found
	I1219 03:24:01.144954  371990 out.go:179] * Using the docker driver based on user configuration
	I1219 03:24:01.146278  371990 start.go:309] selected driver: docker
	I1219 03:24:01.146299  371990 start.go:928] validating driver "docker" against <nil>
	I1219 03:24:01.146315  371990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:01.147198  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.207023  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.196664778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.207180  371990 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1219 03:24:01.207207  371990 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1219 03:24:01.207525  371990 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:01.209632  371990 out.go:179] * Using Docker driver with root privileges
	I1219 03:24:01.210891  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:01.210974  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:01.210985  371990 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 03:24:01.211049  371990 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.212320  371990 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:01.213422  371990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:01.214779  371990 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:01.215953  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.216006  371990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:01.216025  371990 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:01.216047  371990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:01.216120  371990 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:01.216133  371990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:01.216218  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:01.216239  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json: {Name:mkf2bb7657c731e279d378a607e1a523b320a47e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:01.237349  371990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:01.237368  371990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:01.237386  371990 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:01.237420  371990 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:01.237512  371990 start.go:364] duration metric: took 75.602µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:01.237534  371990 start.go:93] Provisioning new machine with config: &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:01.237590  371990 start.go:125] createHost starting for "" (driver="docker")
	I1219 03:24:01.239751  371990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1219 03:24:01.239974  371990 start.go:159] libmachine.API.Create for "newest-cni-837172" (driver="docker")
	I1219 03:24:01.240017  371990 client.go:173] LocalClient.Create starting
	I1219 03:24:01.240087  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem
	I1219 03:24:01.240117  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240136  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240185  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem
	I1219 03:24:01.240204  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240213  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240512  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 03:24:01.257883  371990 cli_runner.go:211] docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 03:24:01.258008  371990 network_create.go:284] running [docker network inspect newest-cni-837172] to gather additional debugging logs...
	I1219 03:24:01.258034  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172
	W1219 03:24:01.275377  371990 cli_runner.go:211] docker network inspect newest-cni-837172 returned with exit code 1
	I1219 03:24:01.275412  371990 network_create.go:287] error running [docker network inspect newest-cni-837172]: docker network inspect newest-cni-837172: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-837172 not found
	I1219 03:24:01.275429  371990 network_create.go:289] output of [docker network inspect newest-cni-837172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-837172 not found
	
	** /stderr **
	I1219 03:24:01.275535  371990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:01.294388  371990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d70e62b79a31 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:cf:22:72:cb:a0} reservation:<nil>}
	I1219 03:24:01.295272  371990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-980aea652065 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ba:dd:9c:97:fb:7d} reservation:<nil>}
	I1219 03:24:01.296258  371990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42b42f6a5044 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:1e:31:1b:21:84} reservation:<nil>}
	I1219 03:24:01.297569  371990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec48c0}
	I1219 03:24:01.297599  371990 network_create.go:124] attempt to create docker network newest-cni-837172 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1219 03:24:01.297651  371990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-837172 newest-cni-837172
	I1219 03:24:01.350655  371990 network_create.go:108] docker network newest-cni-837172 192.168.76.0/24 created
	I1219 03:24:01.350682  371990 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-837172" container
	I1219 03:24:01.350794  371990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 03:24:01.370331  371990 cli_runner.go:164] Run: docker volume create newest-cni-837172 --label name.minikube.sigs.k8s.io=newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true
	I1219 03:24:01.391519  371990 oci.go:103] Successfully created a docker volume newest-cni-837172
	I1219 03:24:01.391624  371990 cli_runner.go:164] Run: docker run --rm --name newest-cni-837172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --entrypoint /usr/bin/test -v newest-cni-837172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1219 03:24:01.840345  371990 oci.go:107] Successfully prepared a docker volume newest-cni-837172
	I1219 03:24:01.840449  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.840465  371990 kic.go:194] Starting extracting preloaded images to volume ...
	I1219 03:24:01.840529  371990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1219 03:24:05.697885  371990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.857303195s)
	I1219 03:24:05.697924  371990 kic.go:203] duration metric: took 3.857455339s to extract preloaded images to volume ...
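Long-running steps such as the preload extraction above are reported with a "Completed: ... (3.857303195s)" line and a matching "duration metric: took ..." summary. As an aside, here is a minimal Go sketch of that timing pattern; the command it runs is only a placeholder and the helper name timedRun is mine, not minikube's cli_runner.

    // timedrun.go - sketch of the "run a command, report its duration" pattern
    // seen throughout this log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // timedRun executes a command and prints how long it took, similar to the
    // "Completed: ... (3.857303195s)" trailer in the log above.
    func timedRun(name string, args ...string) error {
    	start := time.Now()
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("Completed: %s %v (%s)\n", name, args, time.Since(start))
    	if err != nil {
    		return fmt.Errorf("%s: %w\n%s", name, err, out)
    	}
    	return nil
    }

    func main() {
    	// Placeholder invocation; the real extraction command is shown verbatim above.
    	_ = timedRun("docker", "volume", "inspect", "newest-cni-837172")
    }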
	W1219 03:24:05.698024  371990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1219 03:24:05.698058  371990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1219 03:24:05.698100  371990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 03:24:05.757547  371990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-837172 --name newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-837172 --network newest-cni-837172 --ip 192.168.76.2 --volume newest-cni-837172:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1219 03:24:06.051568  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Running}}
	I1219 03:24:06.072261  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.093313  371990 cli_runner.go:164] Run: docker exec newest-cni-837172 stat /var/lib/dpkg/alternatives/iptables
	I1219 03:24:06.144238  371990 oci.go:144] the created container "newest-cni-837172" has a running status.
	I1219 03:24:06.144278  371990 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa...
	I1219 03:24:06.230796  371990 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 03:24:06.256299  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.273734  371990 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1219 03:24:06.273758  371990 kic_runner.go:114] Args: [docker exec --privileged newest-cni-837172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1219 03:24:06.341522  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.363532  371990 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:06.363655  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:06.390168  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:06.390536  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:06.390552  371990 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:06.391620  371990 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34054->127.0.0.1:33138: read: connection reset by peer
	I1219 03:24:09.536680  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.536733  371990 ubuntu.go:182] provisioning hostname "newest-cni-837172"
	I1219 03:24:09.536797  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.555045  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.555325  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.555340  371990 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-837172 && echo "newest-cni-837172" | sudo tee /etc/hostname
	I1219 03:24:09.709116  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.709183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.727847  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.728289  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.728322  371990 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-837172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-837172/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-837172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:09.871486  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:09.871529  371990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:24:09.871588  371990 ubuntu.go:190] setting up certificates
	I1219 03:24:09.871600  371990 provision.go:84] configureAuth start
	I1219 03:24:09.871666  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:09.890551  371990 provision.go:143] copyHostCerts
	I1219 03:24:09.890608  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:24:09.890616  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:24:09.890710  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:24:09.890819  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:24:09.890829  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:24:09.890867  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:24:09.890920  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:24:09.890933  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:24:09.890959  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:24:09.891015  371990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-837172 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-837172]
	I1219 03:24:09.923962  371990 provision.go:177] copyRemoteCerts
	I1219 03:24:09.924021  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:09.924055  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.943177  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.046012  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:10.066001  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:10.083456  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:24:10.101464  371990 provision.go:87] duration metric: took 229.847544ms to configureAuth
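configureAuth above generates a server certificate whose SANs are listed as san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-837172]. If you ever need to confirm what actually ended up in server.pem, a small stdlib-only Go check like the following works; the file path is taken from this log and would differ on another host.

    // sancheck.go - sketch that prints the DNS and IP SANs of the generated
    // server certificate.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("DNS SANs:", cert.DNSNames)
    	fmt.Println("IP SANs: ", cert.IPAddresses)
    }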
	I1219 03:24:10.101492  371990 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:10.101673  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:10.101801  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.120532  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:10.120821  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:10.120839  371990 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:24:10.410477  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:24:10.410502  371990 machine.go:97] duration metric: took 4.046944113s to provisionDockerMachine
	I1219 03:24:10.410513  371990 client.go:176] duration metric: took 9.170488353s to LocalClient.Create
	I1219 03:24:10.410535  371990 start.go:167] duration metric: took 9.170561433s to libmachine.API.Create "newest-cni-837172"
	I1219 03:24:10.410546  371990 start.go:293] postStartSetup for "newest-cni-837172" (driver="docker")
	I1219 03:24:10.410559  371990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:10.410613  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:10.410664  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.430222  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.533641  371990 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:10.537745  371990 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:10.537783  371990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:10.537806  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:24:10.537857  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:24:10.537934  371990 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:24:10.538030  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:10.545818  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:10.566832  371990 start.go:296] duration metric: took 156.272185ms for postStartSetup
	I1219 03:24:10.567244  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.586641  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:10.586934  371990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:10.586987  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.604894  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.703924  371990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:10.708480  371990 start.go:128] duration metric: took 9.470874061s to createHost
	I1219 03:24:10.708519  371990 start.go:83] releasing machines lock for "newest-cni-837172", held for 9.47099552s
	I1219 03:24:10.708596  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.727823  371990 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:10.727853  371990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:10.727877  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.727922  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.748155  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.748577  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.899556  371990 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:10.906157  371990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:24:10.942010  371990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:10.946776  371990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:10.946834  371990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:10.972921  371990 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:24:10.972943  371990 start.go:496] detecting cgroup driver to use...
	I1219 03:24:10.972971  371990 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:10.973032  371990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:24:10.989146  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:24:11.002203  371990 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:11.002282  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:11.018422  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:11.035554  371990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:11.119919  371990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:11.207179  371990 docker.go:234] disabling docker service ...
	I1219 03:24:11.207252  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:11.225572  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:11.237859  371990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:11.323024  371990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:11.407303  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:24:11.419524  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:11.433341  371990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:24:11.433395  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.443408  371990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:24:11.443468  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.452460  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.460889  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.469451  371990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:11.477277  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.485766  371990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.499106  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.508174  371990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:11.515313  371990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:11.522319  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:11.604796  371990 ssh_runner.go:195] Run: sudo systemctl restart crio
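The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10.1 and to switch the cgroup manager to systemd before CRI-O is restarted. A quick way to confirm both edits landed (run inside the node, e.g. via minikube ssh) is a scan like this sketch; the file path is the one from the log.

    // criocheck.go - sketch that prints the pause_image and cgroup_manager
    // settings from the CRI-O drop-in edited above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
    	if err != nil {
    		panic(err)
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		trimmed := strings.TrimSpace(line)
    		if strings.HasPrefix(trimmed, "pause_image") ||
    			strings.HasPrefix(trimmed, "cgroup_manager") {
    			fmt.Println(trimmed)
    		}
    	}
    }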
	I1219 03:24:11.746317  371990 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:24:11.746376  371990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:24:11.750220  371990 start.go:564] Will wait 60s for crictl version
	I1219 03:24:11.750278  371990 ssh_runner.go:195] Run: which crictl
	I1219 03:24:11.753821  371990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:11.777608  371990 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:24:11.777714  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.804073  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.833640  371990 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:24:11.834886  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:11.852567  371990 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:11.856667  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:11.871316  371990 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:11.872497  371990 kubeadm.go:884] updating cluster {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:11.872642  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:11.872692  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.904183  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.904204  371990 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:24:11.904263  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.930999  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.931020  371990 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:11.931026  371990 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:24:11.931148  371990 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-837172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:24:11.931228  371990 ssh_runner.go:195] Run: crio config
	I1219 03:24:11.976472  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:11.976491  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:11.976503  371990 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:11.976531  371990 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-837172 NodeName:newest-cni-837172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:11.976658  371990 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-837172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:24:11.976739  371990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:11.985021  371990 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:11.985080  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:11.992859  371990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:24:12.006496  371990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:12.021643  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
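The generated kubeadm configuration shown above is written to /var/tmp/minikube/kubeadm.yaml.new (2216 bytes) before being activated. As a quick structural sanity check, the sketch below splits that file into its YAML documents and prints each apiVersion/kind; it uses only the standard library and simple line scanning rather than a YAML parser, and the file path is the one from the log.

    // kubeadmkinds.go - sketch that lists the apiVersion/kind of every document
    // in the generated kubeadm config.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		var apiVersion, kind string
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "apiVersion:") {
    				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
    			}
    			if strings.HasPrefix(line, "kind:") {
    				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
    			}
    		}
    		fmt.Printf("doc %d: %s %s\n", i+1, apiVersion, kind)
    	}
    }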
	I1219 03:24:12.034441  371990 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:12.038092  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:12.047986  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:12.128789  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:12.152988  371990 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172 for IP: 192.168.76.2
	I1219 03:24:12.153016  371990 certs.go:195] generating shared ca certs ...
	I1219 03:24:12.153035  371990 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.153175  371990 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:24:12.153220  371990 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:24:12.153233  371990 certs.go:257] generating profile certs ...
	I1219 03:24:12.153289  371990 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key
	I1219 03:24:12.153302  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt with IP's: []
	I1219 03:24:12.271406  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt ...
	I1219 03:24:12.271435  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt: {Name:mke8fed86df635a05f54420e92870363146991f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271601  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key ...
	I1219 03:24:12.271612  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key: {Name:mk39737e3f76352137132fe8060ef391a0d43bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271690  371990 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b
	I1219 03:24:12.271717  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1219 03:24:12.379475  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b ...
	I1219 03:24:12.379503  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b: {Name:mkc4d74c8f8c4deb077c8f688d203329a2c5750d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379662  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b ...
	I1219 03:24:12.379675  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b: {Name:mk1b93ad6f4ca843c3104dc76975062dde81eaef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379761  371990 certs.go:382] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt
	I1219 03:24:12.379853  371990 certs.go:386] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key
	I1219 03:24:12.379918  371990 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key
	I1219 03:24:12.379940  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt with IP's: []
	I1219 03:24:12.467338  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt ...
	I1219 03:24:12.467368  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt: {Name:mk5dc8f653da407b5f14ca799301800eac0952c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467561  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key ...
	I1219 03:24:12.467581  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key: {Name:mk4063cc1af4dbf73c9c390b468c828c35385b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467821  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:24:12.467864  371990 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:12.467875  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:24:12.467901  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:12.467925  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:12.467953  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:12.468001  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:12.468519  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:12.487159  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:12.504306  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:12.521550  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:24:12.538418  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:12.554861  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:12.572166  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:12.589324  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:24:12.606224  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:12.625269  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:24:12.642642  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:24:12.658965  371990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:12.671458  371990 ssh_runner.go:195] Run: openssl version
	I1219 03:24:12.677537  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.684496  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:12.691660  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695495  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695541  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.730806  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:24:12.738920  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:24:12.746295  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.753462  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:24:12.760758  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764356  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764415  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.800484  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:12.809192  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8536.pem /etc/ssl/certs/51391683.0
	I1219 03:24:12.816759  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.825274  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:24:12.833125  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836939  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836993  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.871891  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:12.879672  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85362.pem /etc/ssl/certs/3ec20f2e.0
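The openssl x509 -hash -noout runs above print the subject hash that OpenSSL's certificate-directory lookup expects, and each following ln -fs creates the matching /etc/ssl/certs/<hash>.0 symlink. This Go sketch shows the same check end to end: it shells out to openssl for the hash and then resolves the symlink. The certificate path is the one used in the log; on another machine both paths would differ.

    // cahash.go - sketch of the "<subject-hash>.0 symlink" convention exercised above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	target, err := os.Readlink(link)
    	if err != nil {
    		fmt.Println("symlink missing:", link)
    		return
    	}
    	fmt.Printf("%s -> %s\n", link, target)
    }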
	I1219 03:24:12.887040  371990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:12.890648  371990 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 03:24:12.890729  371990 kubeadm.go:401] StartCluster: {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:12.890825  371990 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:12.890893  371990 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:12.920058  371990 cri.go:92] found id: ""
	I1219 03:24:12.920133  371990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:12.928606  371990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:24:12.936934  371990 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1219 03:24:12.936985  371990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:24:12.945218  371990 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:24:12.945240  371990 kubeadm.go:158] found existing configuration files:
	
	I1219 03:24:12.945287  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 03:24:12.952614  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:24:12.952666  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:24:12.960262  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 03:24:12.967725  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:24:12.967831  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:24:12.975015  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.982506  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:24:12.982549  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.989686  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 03:24:12.997834  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:24:12.997888  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:24:13.005263  371990 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1219 03:24:13.041610  371990 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1219 03:24:13.041730  371990 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 03:24:13.106822  371990 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 03:24:13.106921  371990 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 03:24:13.106982  371990 kubeadm.go:319] OS: Linux
	I1219 03:24:13.107046  371990 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 03:24:13.107146  371990 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 03:24:13.107237  371990 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 03:24:13.107288  371990 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 03:24:13.107344  371990 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 03:24:13.107385  371990 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 03:24:13.107463  371990 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 03:24:13.107538  371990 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 03:24:13.164958  371990 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 03:24:13.165152  371990 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 03:24:13.165292  371990 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 03:24:13.174971  371990 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 03:24:13.178028  371990 out.go:252]   - Generating certificates and keys ...
	I1219 03:24:13.178136  371990 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 03:24:13.178232  371990 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 03:24:13.301903  371990 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 03:24:13.387971  371990 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 03:24:13.500057  371990 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 03:24:13.603458  371990 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 03:24:13.636925  371990 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 03:24:13.637122  371990 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:13.836231  371990 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 03:24:13.836371  371990 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:14.002346  371990 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 03:24:14.032095  371990 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 03:24:14.137234  371990 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 03:24:14.137362  371990 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 03:24:14.167788  371990 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 03:24:14.256296  371990 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 03:24:14.335846  371990 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 03:24:14.409462  371990 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 03:24:14.592839  371990 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 03:24:14.593412  371990 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 03:24:14.597164  371990 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 03:24:14.598823  371990 out.go:252]   - Booting up control plane ...
	I1219 03:24:14.598951  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 03:24:14.599066  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 03:24:14.599695  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 03:24:14.613628  371990 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 03:24:14.613794  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 03:24:14.621414  371990 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 03:24:14.621682  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 03:24:14.621767  371990 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 03:24:14.720948  371990 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 03:24:14.721103  371990 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 03:24:15.222675  371990 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.8355ms
	I1219 03:24:15.227351  371990 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 03:24:15.227489  371990 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1219 03:24:15.227609  371990 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 03:24:15.227757  371990 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 03:24:16.232434  371990 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004794877s
	I1219 03:24:16.822339  371990 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.594795775s
	I1219 03:24:18.729241  371990 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501609989s
	I1219 03:24:18.747830  371990 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 03:24:18.757789  371990 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 03:24:18.768843  371990 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 03:24:18.769101  371990 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-837172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 03:24:18.777248  371990 kubeadm.go:319] [bootstrap-token] Using token: tjh3gu.t27j0f9f7y1maup8
	I1219 03:24:18.778596  371990 out.go:252]   - Configuring RBAC rules ...
	I1219 03:24:18.778756  371990 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 03:24:18.782127  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 03:24:18.788723  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 03:24:18.791752  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 03:24:18.794369  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 03:24:18.796980  371990 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 03:24:19.135416  371990 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 03:24:19.551422  371990 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 03:24:20.135668  371990 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 03:24:20.136573  371990 kubeadm.go:319] 
	I1219 03:24:20.136667  371990 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 03:24:20.136677  371990 kubeadm.go:319] 
	I1219 03:24:20.136815  371990 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 03:24:20.136852  371990 kubeadm.go:319] 
	I1219 03:24:20.136883  371990 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 03:24:20.136970  371990 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 03:24:20.137020  371990 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 03:24:20.137026  371990 kubeadm.go:319] 
	I1219 03:24:20.137089  371990 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 03:24:20.137101  371990 kubeadm.go:319] 
	I1219 03:24:20.137171  371990 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 03:24:20.137179  371990 kubeadm.go:319] 
	I1219 03:24:20.137247  371990 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 03:24:20.137362  371990 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 03:24:20.137462  371990 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 03:24:20.137475  371990 kubeadm.go:319] 
	I1219 03:24:20.137594  371990 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 03:24:20.137725  371990 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 03:24:20.137741  371990 kubeadm.go:319] 
	I1219 03:24:20.137841  371990 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.137977  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 03:24:20.138014  371990 kubeadm.go:319] 	--control-plane 
	I1219 03:24:20.138022  371990 kubeadm.go:319] 
	I1219 03:24:20.138116  371990 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 03:24:20.138124  371990 kubeadm.go:319] 
	I1219 03:24:20.138229  371990 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.138367  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
	I1219 03:24:20.141307  371990 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1219 03:24:20.141417  371990 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1219 03:24:20.141469  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:20.141490  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:20.143537  371990 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 03:24:20.144502  371990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:24:20.148822  371990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1219 03:24:20.148843  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:24:20.161612  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1219 03:24:20.379173  371990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:24:20.379262  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.379275  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-837172 minikube.k8s.io/updated_at=2025_12_19T03_24_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=newest-cni-837172 minikube.k8s.io/primary=true
	I1219 03:24:20.388746  371990 ops.go:34] apiserver oom_adj: -16
	I1219 03:24:20.454762  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.955824  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.454834  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.954831  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.455563  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.955820  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.454808  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.955426  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.454807  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.521140  371990 kubeadm.go:1114] duration metric: took 4.141930442s to wait for elevateKubeSystemPrivileges
	I1219 03:24:24.521185  371990 kubeadm.go:403] duration metric: took 11.630460792s to StartCluster
	I1219 03:24:24.521209  371990 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.521280  371990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:24.522690  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.522969  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:24:24.522985  371990 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:24.523053  371990 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:24.523152  371990 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-837172"
	I1219 03:24:24.523166  371990 addons.go:70] Setting default-storageclass=true in profile "newest-cni-837172"
	I1219 03:24:24.523191  371990 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-837172"
	I1219 03:24:24.523195  371990 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-837172"
	I1219 03:24:24.523231  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.523251  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:24.523588  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.523773  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.524387  371990 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:24.525579  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:24.547572  371990 addons.go:239] Setting addon default-storageclass=true in "newest-cni-837172"
	I1219 03:24:24.547634  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.547832  371990 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:24.548129  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.552104  371990 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.552127  371990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:24.552183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.578893  371990 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.579252  371990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:24.579323  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.583084  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.603726  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.615978  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:24:24.668369  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:24.704139  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.719590  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.803320  371990 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1219 03:24:24.805437  371990 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:24.805497  371990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:25.029229  371990 api_server.go:72] duration metric: took 506.215716ms to wait for apiserver process to appear ...
	I1219 03:24:25.029261  371990 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:25.029282  371990 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:25.034829  371990 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:24:25.035777  371990 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:25.035813  371990 api_server.go:131] duration metric: took 6.544499ms to wait for apiserver health ...
	I1219 03:24:25.035828  371990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:25.038607  371990 system_pods.go:59] 8 kube-system pods found
	I1219 03:24:25.038639  371990 system_pods.go:61] "coredns-7d764666f9-ckc9j" [5bc3e758-2623-4eae-87fe-a58b932c9e87] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038651  371990 system_pods.go:61] "etcd-newest-cni-837172" [59f28fae-3605-487b-a1b8-c3851c47abac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:25.038659  371990 system_pods.go:61] "kindnet-846n4" [b45c7fbd-085c-4972-b312-0973aab68ddc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:25.038670  371990 system_pods.go:61] "kube-apiserver-newest-cni-837172" [8d92900e-716d-42ad-9d88-1ca6d0ddf5c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:25.038678  371990 system_pods.go:61] "kube-controller-manager-newest-cni-837172" [46b3ad5a-64d1-4e1f-8bdf-ce613dcd6348] Running
	I1219 03:24:25.038684  371990 system_pods.go:61] "kube-proxy-6wg2n" [356cd689-df37-49ac-a3f2-1931978ccf64] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:24:25.038690  371990 system_pods.go:61] "kube-scheduler-newest-cni-837172" [da065d09-cc65-42e7-8e0d-9f9709cafaf9] Running
	I1219 03:24:25.038695  371990 system_pods.go:61] "storage-provisioner" [ba402c27-5828-489f-a656-bc0ef2e8f05e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038713  371990 system_pods.go:74] duration metric: took 2.880877ms to wait for pod list to return data ...
	I1219 03:24:25.038720  371990 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:25.038969  371990 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:24:25.040226  371990 addons.go:546] duration metric: took 517.179033ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:24:25.040990  371990 default_sa.go:45] found service account: "default"
	I1219 03:24:25.041006  371990 default_sa.go:55] duration metric: took 2.27792ms for default service account to be created ...
	I1219 03:24:25.041015  371990 kubeadm.go:587] duration metric: took 518.007856ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:25.041030  371990 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:25.043438  371990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:25.043465  371990 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:25.043494  371990 node_conditions.go:105] duration metric: took 2.45952ms to run NodePressure ...
	I1219 03:24:25.043503  371990 start.go:242] waiting for startup goroutines ...
	I1219 03:24:25.308179  371990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-837172" context rescaled to 1 replicas
	I1219 03:24:25.308227  371990 start.go:247] waiting for cluster config update ...
	I1219 03:24:25.308241  371990 start.go:256] writing updated cluster config ...
	I1219 03:24:25.308502  371990 ssh_runner.go:195] Run: rm -f paused
	I1219 03:24:25.358553  371990 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:24:25.360429  371990 out.go:179] * Done! kubectl is now configured to use "newest-cni-837172" cluster and "default" namespace by default
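	
	The start log above ends with minikube injecting a host record for host.minikube.internal into the CoreDNS ConfigMap and enabling the storage-provisioner and default-storageclass addons. A minimal sketch of how that injection could be verified by hand, reusing the kubectl binary and kubeconfig paths that appear in the log (the grep pattern is only an illustrative assumption):
	
	    # Look for the injected "hosts" block mapping host.minikube.internal to 192.168.76.1
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o yaml | grep -A3 "hosts {"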
	
	
	==> CRI-O <==
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.105809849Z" level=info msg="Created container 35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/clear-stale-pid" id=5a645826-349a-438a-8096-df1ef85fa13f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.106574675Z" level=info msg="Starting container: 35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270" id=cac0719a-4055-47b5-9cd1-db557fa3cd0a name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.108867589Z" level=info msg="Started container" PID=1966 containerID=35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/clear-stale-pid id=cac0719a-4055-47b5-9cd1-db557fa3cd0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8667e85e98055a9a632295b2700a5f7426d8a381a2d0a88f63c9f25413fcbb91
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.244843017Z" level=info msg="Checking image status: kong:3.9" id=0cec8e99-8e10-454e-875b-ea15d4a209cd name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.245030729Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.247083766Z" level=info msg="Checking image status: kong:3.9" id=3f2254f1-a52b-4104-87c2-661e1bd23ec3 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.247306541Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.25336671Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy" id=94e64d89-1339-4146-8f9c-53f734cb5589 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.253525887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.260510197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.261326368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.301363315Z" level=info msg="Created container dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy" id=94e64d89-1339-4146-8f9c-53f734cb5589 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.30215616Z" level=info msg="Starting container: dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650" id=12ee788e-8289-4593-8fc9-302f77b8987b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.304379149Z" level=info msg="Started container" PID=1977 containerID=dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy id=12ee788e-8289-4593-8fc9-302f77b8987b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8667e85e98055a9a632295b2700a5f7426d8a381a2d0a88f63c9f25413fcbb91
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.293364694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7a2b6641-2330-4f1c-8ac3-bd5fc486ac9a name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.294343816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25406107-20f3-4be8-a6d5-7899eb74be0f name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.295572666Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ee62b1e2-d30f-4a02-8252-3cb5cb6d1371 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.295760296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302496713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302683962Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/79e095d41c86d78a8608ae66d21f2385ceb6046ec971057f76b5fefa76ae5f30/merged/etc/passwd: no such file or directory"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302750865Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/79e095d41c86d78a8608ae66d21f2385ceb6046ec971057f76b5fefa76ae5f30/merged/etc/group: no such file or directory"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.303093477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.338341513Z" level=info msg="Created container d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6: kube-system/storage-provisioner/storage-provisioner" id=ee62b1e2-d30f-4a02-8252-3cb5cb6d1371 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.339046763Z" level=info msg="Starting container: d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6" id=43566eeb-3dc8-4778-bebc-e3a50bf31b5d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.341081965Z" level=info msg="Started container" PID=3395 containerID=d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6 description=kube-system/storage-provisioner/storage-provisioner id=43566eeb-3dc8-4778-bebc-e3a50bf31b5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=470b7f13281e4c61793ea7eeab1f00af8c464b75a182af8abe8a9e8fcfc00b9a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	d997c9b36079f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   470b7f13281e4       storage-provisioner                                     kube-system
	dd2d524ddac23       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   8667e85e98055       kubernetes-dashboard-kong-9849c64bd-jnmzq               kubernetes-dashboard
	35d02beeb2185       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   8667e85e98055       kubernetes-dashboard-kong-9849c64bd-jnmzq               kubernetes-dashboard
	5fe7d916a364f       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   8df1f8a8e9b8c       kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj   kubernetes-dashboard
	efed0d8824978       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   85c0932639a7f       kubernetes-dashboard-web-5c9f966b98-pmb5t               kubernetes-dashboard
	6e3eff743b9cd       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   226a7334560d4       kubernetes-dashboard-auth-76bb77b695-58swx              kubernetes-dashboard
	5c21853c28563       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   442cfc6f80155       kubernetes-dashboard-api-6c4454678d-vmnj2               kubernetes-dashboard
	561ec43405227       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   bdce9bd9d632c       busybox                                                 default
	2592b062e7872       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           18 minutes ago      Running             coredns                                0                   ad0fcb07810bf       coredns-66bc5c9577-dskxl                                kube-system
	dbbb6a255de37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   470b7f13281e4       storage-provisioner                                     kube-system
	d7b31f6039b4c       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   42aa8ce5cba75       kindnet-zgcrn                                           kube-system
	cd178b86eed6d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           18 minutes ago      Running             kube-proxy                             0                   84cdb0361e2e6       kube-proxy-mr7c8                                        kube-system
	1340a2f59347d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           18 minutes ago      Running             etcd                                   0                   ccb6ae903ae17       etcd-default-k8s-diff-port-717222                       kube-system
	725faee3812c5       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           18 minutes ago      Running             kube-scheduler                         0                   2ad392cb5e514       kube-scheduler-default-k8s-diff-port-717222             kube-system
	d2c496c53c696       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           18 minutes ago      Running             kube-apiserver                         0                   ec833bb6abd84       kube-apiserver-default-k8s-diff-port-717222             kube-system
	0fb4e8910a64f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           18 minutes ago      Running             kube-controller-manager                0                   6217f80d4b77a       kube-controller-manager-default-k8s-diff-port-717222    kube-system
	
	
	==> coredns [2592b062e787245c17fcfad40e551290657aea425be5e044174243d7524bc317] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55129 - 16165 "HINFO IN 3453254911344364497.3052208195299777284. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04385742s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
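	
	The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors above indicate that CoreDNS could not reach the kube-apiserver through the kubernetes service ClusterIP while the pod network was still coming up. A hedged sketch of the kind of check that confirms the service and its endpoints once the cluster is reachable (standard kubectl commands, not taken from this report):
	
	    # Confirm the ClusterIP service and its backing endpoints exist
	    kubectl get svc kubernetes -n default -o wide
	    kubectl get endpointslices -n default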
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-717222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-717222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-717222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_05_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:05:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-717222
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:24:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-717222
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                301b16dc-31c1-4466-a363-b4e4f9941cd5
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-dskxl                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-717222                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-zgcrn                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-717222              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-717222     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-mr7c8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-717222              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-6c4454678d-vmnj2                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-76bb77b695-58swx               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-jnmzq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-pmb5t                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
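	
	The node description above corresponds to a standard "kubectl describe node" capture; a minimal sketch of reproducing the same view against this profile, assuming the kubeconfig written for the 22230-4987 integration run:
	
	    kubectl --kubeconfig=/home/jenkins/minikube-integration/22230-4987/kubeconfig \
	      describe node default-k8s-diff-port-717222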
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78] <==
	{"level":"warn","ts":"2025-12-19T03:06:06.378732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.407834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.459907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.484810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.498580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.516121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.532033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.548224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.567442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.583249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.608694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.623918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:16:01.657050Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":990}
	{"level":"info","ts":"2025-12-19T03:16:01.664840Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":990,"took":"7.456943ms","hash":2471762061,"current-db-size-bytes":3915776,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3915776,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:16:01.664919Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2471762061,"revision":990,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:21:01.662543Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1235}
	{"level":"info","ts":"2025-12-19T03:21:01.664987Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1235,"took":"2.122855ms","hash":87961367,"current-db-size-bytes":3915776,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2101248,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-19T03:21:01.665040Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":87961367,"revision":1235,"compact-revision":990}
	{"level":"info","ts":"2025-12-19T03:24:04.616130Z","caller":"traceutil/trace.go:172","msg":"trace[305349054] transaction","detail":"{read_only:false; response_revision:1632; number_of_response:1; }","duration":"142.323928ms","start":"2025-12-19T03:24:04.473787Z","end":"2025-12-19T03:24:04.616111Z","steps":["trace[305349054] 'process raft request'  (duration: 125.885135ms)","trace[305349054] 'compare'  (duration: 16.341168ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:24:04.788967Z","caller":"traceutil/trace.go:172","msg":"trace[736017107] linearizableReadLoop","detail":"{readStateIndex:1877; appliedIndex:1877; }","duration":"171.212825ms","start":"2025-12-19T03:24:04.617731Z","end":"2025-12-19T03:24:04.788944Z","steps":["trace[736017107] 'read index received'  (duration: 171.200561ms)","trace[736017107] 'applied index is now lower than readState.Index'  (duration: 11.453µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:24:04.789391Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.646552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-12-19T03:24:04.789481Z","caller":"traceutil/trace.go:172","msg":"trace[1502582395] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1632; }","duration":"171.772291ms","start":"2025-12-19T03:24:04.617692Z","end":"2025-12-19T03:24:04.789464Z","steps":["trace[1502582395] 'agreement among raft nodes before linearized reading'  (duration: 171.370456ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:24:04.789526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.513258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configuration.konghq.com/konglicenses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:24:04.789556Z","caller":"traceutil/trace.go:172","msg":"trace[331089237] range","detail":"{range_begin:/registry/configuration.konghq.com/konglicenses; range_end:; response_count:0; response_revision:1633; }","duration":"107.546037ms","start":"2025-12-19T03:24:04.682002Z","end":"2025-12-19T03:24:04.789548Z","steps":["trace[331089237] 'agreement among raft nodes before linearized reading'  (duration: 107.495727ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:24:04.789623Z","caller":"traceutil/trace.go:172","msg":"trace[872270148] transaction","detail":"{read_only:false; response_revision:1633; number_of_response:1; }","duration":"218.608589ms","start":"2025-12-19T03:24:04.570993Z","end":"2025-12-19T03:24:04.789601Z","steps":["trace[872270148] 'process raft request'  (duration: 218.073061ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:24:43 up  1:07,  0 user,  load average: 1.94, 0.92, 1.24
	Linux default-k8s-diff-port-717222 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d7b31f6039b4c71a1c774e7e89359f49dd4bca0b72f47cce0a7db10b8a4eb339] <==
	I1219 03:22:34.146122       1 main.go:301] handling current node
	I1219 03:22:44.143430       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:22:44.143465       1 main.go:301] handling current node
	I1219 03:22:54.151094       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:22:54.151132       1 main.go:301] handling current node
	I1219 03:23:04.143666       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:04.143740       1 main.go:301] handling current node
	I1219 03:23:14.144759       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:14.144798       1 main.go:301] handling current node
	I1219 03:23:24.151061       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:24.151091       1 main.go:301] handling current node
	I1219 03:23:34.143600       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:34.143635       1 main.go:301] handling current node
	I1219 03:23:44.142928       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:44.142982       1 main.go:301] handling current node
	I1219 03:23:54.143360       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:54.143398       1 main.go:301] handling current node
	I1219 03:24:04.151416       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:04.151454       1 main.go:301] handling current node
	I1219 03:24:14.147032       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:14.147066       1 main.go:301] handling current node
	I1219 03:24:24.143281       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:24.143313       1 main.go:301] handling current node
	I1219 03:24:34.148458       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:34.148865       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb] <==
	I1219 03:06:06.068365       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:06:06.073897       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:06:06.084961       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.107.87.247"}
	I1219 03:06:06.089336       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.220.200"}
	I1219 03:06:06.096055       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.107.37.89"}
	I1219 03:06:06.097724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.126.95"}
	I1219 03:06:06.105426       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.103.60.201"}
	I1219 03:06:06.111150       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1219 03:06:06.366398       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.407675       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.460136       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.484666       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.498913       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.516026       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.532002       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.548159       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.564547       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:06:06.583215       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 03:06:06.599243       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	W1219 03:06:06.606221       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.623365       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:06:06.946827       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:06:07.061226       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:16:03.036227       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992] <==
	I1219 03:06:06.443886       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:06:06.448122       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:06:06.448186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 03:06:06.448203       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 03:06:06.448213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 03:06:06.465415       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:06:06.465574       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:06:06.465610       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:06:06.465621       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:06:06.465629       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:06:06.469733       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 03:06:06.472102       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:06:06.475316       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:06:06.478047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:06:06.492013       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:06:06.492117       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:06:06.492629       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:06:06.493189       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:06:06.493873       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:06:07.594172       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:06:07.650019       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:06:07.681489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:06:07.691740       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:06:07.691828       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:06:07.691843       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [cd178b86eed6df4e301822d1cb033cde8457245acc5c1565f60ccb12d47ee2aa] <==
	I1219 03:06:03.629338       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:06:03.701880       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:06:03.802296       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:06:03.802339       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:06:03.802448       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:06:03.830859       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:06:03.830933       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:06:03.839110       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:06:03.840168       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:06:03.840214       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:06:03.842696       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:06:03.842727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:06:03.842694       1 config.go:309] "Starting node config controller"
	I1219 03:06:03.842762       1 config.go:200] "Starting service config controller"
	I1219 03:06:03.842769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:06:03.842768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:06:03.842972       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:06:03.843007       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:06:03.942900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:06:03.942899       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:06:03.942907       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:06:03.943205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a] <==
	I1219 03:06:01.472873       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:06:03.026871       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:06:03.026986       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1219 03:06:03.027002       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:06:03.027011       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:06:03.089314       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:06:03.089358       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:06:03.093055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:06:03.093084       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:06:03.093364       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:06:03.094336       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:06:03.193871       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067763     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/3331ddda-eb3e-4cee-bfd1-ec7b71a257e7-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-9849c64bd-jnmzq\" (UID: \"3331ddda-eb3e-4cee-bfd1-ec7b71a257e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067795     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wfw\" (UniqueName: \"kubernetes.io/projected/b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e-kube-api-access-f5wfw\") pod \"kubernetes-dashboard-api-6c4454678d-vmnj2\" (UID: \"b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067823     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/3331ddda-eb3e-4cee-bfd1-ec7b71a257e7-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-jnmzq\" (UID: \"3331ddda-eb3e-4cee-bfd1-ec7b71a257e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067847     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/3331ddda-eb3e-4cee-bfd1-ec7b71a257e7-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-jnmzq\" (UID: \"3331ddda-eb3e-4cee-bfd1-ec7b71a257e7\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067872     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhhh\" (UniqueName: \"kubernetes.io/projected/af7e569e-9279-40a6-aa17-cda231d867a2-kube-api-access-4lhhh\") pod \"kubernetes-dashboard-web-5c9f966b98-pmb5t\" (UID: \"af7e569e-9279-40a6-aa17-cda231d867a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067900     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmswx\" (UniqueName: \"kubernetes.io/projected/24aef03d-85db-4df3-a193-f13c807f84de-kube-api-access-bmswx\") pod \"kubernetes-dashboard-auth-76bb77b695-58swx\" (UID: \"24aef03d-85db-4df3-a193-f13c807f84de\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067924     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e-tmp-volume\") pod \"kubernetes-dashboard-api-6c4454678d-vmnj2\" (UID: \"b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067959     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/af7e569e-9279-40a6-aa17-cda231d867a2-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-pmb5t\" (UID: \"af7e569e-9279-40a6-aa17-cda231d867a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.068002     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/24aef03d-85db-4df3-a193-f13c807f84de-tmp-volume\") pod \"kubernetes-dashboard-auth-76bb77b695-58swx\" (UID: \"24aef03d-85db-4df3-a193-f13c807f84de\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.068024     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9f54900a-1ad0-4593-8236-0a1dc1a88e64-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj\" (UID: \"9f54900a-1ad0-4593-8236-0a1dc1a88e64\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.110436     727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:06:08 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:08.735645     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:08 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:08.735776     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:09 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:09.227142     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2" podStartSLOduration=0.849461056 podStartE2EDuration="2.227114712s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.357732164 +0000 UTC m=+7.304652030" lastFinishedPulling="2025-12-19 03:06:08.735385823 +0000 UTC m=+8.682305686" observedRunningTime="2025-12-19 03:06:09.226299035 +0000 UTC m=+9.173218910" watchObservedRunningTime="2025-12-19 03:06:09.227114712 +0000 UTC m=+9.174034588"
	Dec 19 03:06:10 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:10.419464     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:10 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:10.419559     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:11 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:11.234033     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx" podStartSLOduration=1.191233274 podStartE2EDuration="4.234006036s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.376415045 +0000 UTC m=+7.323334914" lastFinishedPulling="2025-12-19 03:06:10.419187817 +0000 UTC m=+10.366107676" observedRunningTime="2025-12-19 03:06:11.233777792 +0000 UTC m=+11.180697668" watchObservedRunningTime="2025-12-19 03:06:11.234006036 +0000 UTC m=+11.180925911"
	Dec 19 03:06:13 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:13.311379     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:13 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:13.311529     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.115193     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.115296     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.241972     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj" podStartSLOduration=0.508150908 podStartE2EDuration="7.241948013s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.38113198 +0000 UTC m=+7.328051833" lastFinishedPulling="2025-12-19 03:06:14.11492908 +0000 UTC m=+14.061848938" observedRunningTime="2025-12-19 03:06:14.24166888 +0000 UTC m=+14.188588771" watchObservedRunningTime="2025-12-19 03:06:14.241948013 +0000 UTC m=+14.188867888"
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.255081     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t" podStartSLOduration=1.322160186 podStartE2EDuration="7.255055586s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.378248795 +0000 UTC m=+7.325168663" lastFinishedPulling="2025-12-19 03:06:13.311144187 +0000 UTC m=+13.258064063" observedRunningTime="2025-12-19 03:06:14.254652221 +0000 UTC m=+14.201572121" watchObservedRunningTime="2025-12-19 03:06:14.255055586 +0000 UTC m=+14.201975462"
	Dec 19 03:06:19 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:19.265507     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq" podStartSLOduration=1.591075171 podStartE2EDuration="12.26547879s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.391768736 +0000 UTC m=+7.338688592" lastFinishedPulling="2025-12-19 03:06:18.066172352 +0000 UTC m=+18.013092211" observedRunningTime="2025-12-19 03:06:19.265420913 +0000 UTC m=+19.212340789" watchObservedRunningTime="2025-12-19 03:06:19.26547879 +0000 UTC m=+19.212398667"
	Dec 19 03:06:34 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:34.292974     727 scope.go:117] "RemoveContainer" containerID="dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d"
	
	
	==> kubernetes-dashboard [5c21853c28563a691ef440986410f18c67ba23dbc122b1d94b9cce6075bdfb75] <==
	I1219 03:06:08.860787       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:06:08.860900       1 init.go:49] Using in-cluster config
	I1219 03:06:08.861145       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:06:08.861164       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:06:08.861172       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:06:08.861177       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:06:08.868063       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:06:08.868091       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:06:08.944605       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:06:08.948604       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:06:38.953964       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [5fe7d916a364f331d8aa2665bfdbeab1fff27316fa0fee64cb7834c35bef418d] <==
	10.244.0.1 - - [19/Dec/2025:03:22:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:52 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:52 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	E1219 03:22:14.229684       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:14.229789       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:24:14.230171       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [6e3eff743b9cdb70ef6cbf70a1039d5cff4c8fe2e48d5a15acb23261f2b4507e] <==
	I1219 03:06:10.539923       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:06:10.540000       1 init.go:49] Using in-cluster config
	I1219 03:06:10.540134       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [efed0d882497800414676940b84aa41e026026efe618a2d160430de527d8e1f6] <==
	I1219 03:06:13.510889       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:06:13.510946       1 init.go:48] Using in-cluster config
	I1219 03:06:13.511172       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6] <==
	W1219 03:24:17.886511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:19.890290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:19.894689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:21.898289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:21.902374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.906060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.911319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.915018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.919611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.923189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.928429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.932430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.936267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.939085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.946026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.949633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.953689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.956529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.961490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.964279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.968416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.971880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.976073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:41.979852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:41.987361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d] <==
	I1219 03:06:03.592106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:06:33.595312       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-433330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-433330 --alsologtostderr -v=1: exit status 80 (2.334521621s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-433330 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:23:51.113145  367389 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:23:51.113256  367389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:23:51.113265  367389 out.go:374] Setting ErrFile to fd 2...
	I1219 03:23:51.113270  367389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:23:51.113497  367389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:23:51.113757  367389 out.go:368] Setting JSON to false
	I1219 03:23:51.113776  367389 mustload.go:66] Loading cluster: old-k8s-version-433330
	I1219 03:23:51.114134  367389 config.go:182] Loaded profile config "old-k8s-version-433330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1219 03:23:51.114527  367389 cli_runner.go:164] Run: docker container inspect old-k8s-version-433330 --format={{.State.Status}}
	I1219 03:23:51.133086  367389 host.go:66] Checking if "old-k8s-version-433330" exists ...
	I1219 03:23:51.133351  367389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:23:51.190801  367389 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-19 03:23:51.180476828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:23:51.191522  367389 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-433330 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1219 03:23:51.193619  367389 out.go:179] * Pausing node old-k8s-version-433330 ... 
	I1219 03:23:51.195098  367389 host.go:66] Checking if "old-k8s-version-433330" exists ...
	I1219 03:23:51.195347  367389 ssh_runner.go:195] Run: systemctl --version
	I1219 03:23:51.195390  367389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-433330
	I1219 03:23:51.215358  367389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/old-k8s-version-433330/id_rsa Username:docker}
	I1219 03:23:51.316649  367389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:23:51.329440  367389 pause.go:52] kubelet running: true
	I1219 03:23:51.329521  367389 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:23:51.517521  367389 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:23:51.517603  367389 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:23:51.588620  367389 cri.go:92] found id: "b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622"
	I1219 03:23:51.588643  367389 cri.go:92] found id: "8040658b9f3eabbbb5ba47f09aef99df774aaa0bfb4e998b1fb75e4a9d80669f"
	I1219 03:23:51.588647  367389 cri.go:92] found id: "9243551aa2fc190ba88910a889833487b7b7643a56fe7be4cf6f3b57fe4c6fb8"
	I1219 03:23:51.588651  367389 cri.go:92] found id: "9a529209e91c73dd4753533a0684e92d16e448ef5aed74f450f82867ec17304c"
	I1219 03:23:51.588654  367389 cri.go:92] found id: "4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d"
	I1219 03:23:51.588663  367389 cri.go:92] found id: "ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e"
	I1219 03:23:51.588666  367389 cri.go:92] found id: "dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386"
	I1219 03:23:51.588669  367389 cri.go:92] found id: "6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b"
	I1219 03:23:51.588672  367389 cri.go:92] found id: "e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100"
	I1219 03:23:51.588677  367389 cri.go:92] found id: "9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2"
	I1219 03:23:51.588680  367389 cri.go:92] found id: "1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4"
	I1219 03:23:51.588693  367389 cri.go:92] found id: "43a7239d34381053665a57780cf9a1e8bf2693f67351d58a11eb8f0a2008e906"
	I1219 03:23:51.588696  367389 cri.go:92] found id: "c787e566a13574b826852efdc66cad3575fec6831fdc3a03d85531ffb5a730c9"
	I1219 03:23:51.588735  367389 cri.go:92] found id: "572a9a98a5b172d9fdd8fb35ee68c37988f906c342e70e89e9bc008440b9c471"
	I1219 03:23:51.588744  367389 cri.go:92] found id: "162ae6553f9ecd77ec53fa67c114215b76b2baa657cb81e65ce4554f0273417d"
	I1219 03:23:51.588752  367389 cri.go:92] found id: ""
	I1219 03:23:51.588805  367389 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:23:51.601038  367389 retry.go:31] will retry after 367.903416ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:51Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:23:51.969625  367389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:23:51.983075  367389 pause.go:52] kubelet running: false
	I1219 03:23:51.983142  367389 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:23:52.145573  367389 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:23:52.145653  367389 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:23:52.216920  367389 cri.go:92] found id: "b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622"
	I1219 03:23:52.216943  367389 cri.go:92] found id: "8040658b9f3eabbbb5ba47f09aef99df774aaa0bfb4e998b1fb75e4a9d80669f"
	I1219 03:23:52.216947  367389 cri.go:92] found id: "9243551aa2fc190ba88910a889833487b7b7643a56fe7be4cf6f3b57fe4c6fb8"
	I1219 03:23:52.216950  367389 cri.go:92] found id: "9a529209e91c73dd4753533a0684e92d16e448ef5aed74f450f82867ec17304c"
	I1219 03:23:52.216953  367389 cri.go:92] found id: "4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d"
	I1219 03:23:52.216959  367389 cri.go:92] found id: "ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e"
	I1219 03:23:52.216962  367389 cri.go:92] found id: "dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386"
	I1219 03:23:52.216965  367389 cri.go:92] found id: "6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b"
	I1219 03:23:52.216968  367389 cri.go:92] found id: "e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100"
	I1219 03:23:52.216983  367389 cri.go:92] found id: "9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2"
	I1219 03:23:52.216990  367389 cri.go:92] found id: "1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4"
	I1219 03:23:52.216994  367389 cri.go:92] found id: "43a7239d34381053665a57780cf9a1e8bf2693f67351d58a11eb8f0a2008e906"
	I1219 03:23:52.216999  367389 cri.go:92] found id: "c787e566a13574b826852efdc66cad3575fec6831fdc3a03d85531ffb5a730c9"
	I1219 03:23:52.217003  367389 cri.go:92] found id: "572a9a98a5b172d9fdd8fb35ee68c37988f906c342e70e89e9bc008440b9c471"
	I1219 03:23:52.217007  367389 cri.go:92] found id: "162ae6553f9ecd77ec53fa67c114215b76b2baa657cb81e65ce4554f0273417d"
	I1219 03:23:52.217013  367389 cri.go:92] found id: ""
	I1219 03:23:52.217055  367389 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:23:52.229323  367389 retry.go:31] will retry after 246.066324ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:52Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:23:52.475762  367389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:23:52.490856  367389 pause.go:52] kubelet running: false
	I1219 03:23:52.490917  367389 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:23:52.687759  367389 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:23:52.687833  367389 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:23:52.767847  367389 cri.go:92] found id: "b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622"
	I1219 03:23:52.767872  367389 cri.go:92] found id: "8040658b9f3eabbbb5ba47f09aef99df774aaa0bfb4e998b1fb75e4a9d80669f"
	I1219 03:23:52.767877  367389 cri.go:92] found id: "9243551aa2fc190ba88910a889833487b7b7643a56fe7be4cf6f3b57fe4c6fb8"
	I1219 03:23:52.767880  367389 cri.go:92] found id: "9a529209e91c73dd4753533a0684e92d16e448ef5aed74f450f82867ec17304c"
	I1219 03:23:52.767890  367389 cri.go:92] found id: "4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d"
	I1219 03:23:52.767895  367389 cri.go:92] found id: "ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e"
	I1219 03:23:52.767900  367389 cri.go:92] found id: "dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386"
	I1219 03:23:52.767905  367389 cri.go:92] found id: "6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b"
	I1219 03:23:52.767909  367389 cri.go:92] found id: "e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100"
	I1219 03:23:52.767917  367389 cri.go:92] found id: "9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2"
	I1219 03:23:52.767925  367389 cri.go:92] found id: "1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4"
	I1219 03:23:52.767930  367389 cri.go:92] found id: "43a7239d34381053665a57780cf9a1e8bf2693f67351d58a11eb8f0a2008e906"
	I1219 03:23:52.767934  367389 cri.go:92] found id: "c787e566a13574b826852efdc66cad3575fec6831fdc3a03d85531ffb5a730c9"
	I1219 03:23:52.767939  367389 cri.go:92] found id: "572a9a98a5b172d9fdd8fb35ee68c37988f906c342e70e89e9bc008440b9c471"
	I1219 03:23:52.767949  367389 cri.go:92] found id: "162ae6553f9ecd77ec53fa67c114215b76b2baa657cb81e65ce4554f0273417d"
	I1219 03:23:52.767960  367389 cri.go:92] found id: ""
	I1219 03:23:52.768007  367389 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:23:52.779775  367389 retry.go:31] will retry after 315.230645ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:52Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:23:53.095283  367389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:23:53.114683  367389 pause.go:52] kubelet running: false
	I1219 03:23:53.114767  367389 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:23:53.288981  367389 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:23:53.289080  367389 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:23:53.366397  367389 cri.go:92] found id: "b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622"
	I1219 03:23:53.366424  367389 cri.go:92] found id: "8040658b9f3eabbbb5ba47f09aef99df774aaa0bfb4e998b1fb75e4a9d80669f"
	I1219 03:23:53.366430  367389 cri.go:92] found id: "9243551aa2fc190ba88910a889833487b7b7643a56fe7be4cf6f3b57fe4c6fb8"
	I1219 03:23:53.366435  367389 cri.go:92] found id: "9a529209e91c73dd4753533a0684e92d16e448ef5aed74f450f82867ec17304c"
	I1219 03:23:53.366445  367389 cri.go:92] found id: "4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d"
	I1219 03:23:53.366449  367389 cri.go:92] found id: "ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e"
	I1219 03:23:53.366452  367389 cri.go:92] found id: "dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386"
	I1219 03:23:53.366455  367389 cri.go:92] found id: "6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b"
	I1219 03:23:53.366457  367389 cri.go:92] found id: "e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100"
	I1219 03:23:53.366469  367389 cri.go:92] found id: "9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2"
	I1219 03:23:53.366481  367389 cri.go:92] found id: "1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4"
	I1219 03:23:53.366483  367389 cri.go:92] found id: "43a7239d34381053665a57780cf9a1e8bf2693f67351d58a11eb8f0a2008e906"
	I1219 03:23:53.366486  367389 cri.go:92] found id: "c787e566a13574b826852efdc66cad3575fec6831fdc3a03d85531ffb5a730c9"
	I1219 03:23:53.366489  367389 cri.go:92] found id: "572a9a98a5b172d9fdd8fb35ee68c37988f906c342e70e89e9bc008440b9c471"
	I1219 03:23:53.366491  367389 cri.go:92] found id: "162ae6553f9ecd77ec53fa67c114215b76b2baa657cb81e65ce4554f0273417d"
	I1219 03:23:53.366497  367389 cri.go:92] found id: ""
	I1219 03:23:53.366526  367389 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:23:53.380531  367389 out.go:203] 
	W1219 03:23:53.381840  367389 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 03:23:53.381862  367389 out.go:285] * 
	* 
	W1219 03:23:53.386236  367389 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 03:23:53.387778  367389 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-433330 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-433330
helpers_test.go:244: (dbg) docker inspect old-k8s-version-433330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	        "Created": "2025-12-19T03:03:42.290394762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 338430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:00.142567023Z",
	            "FinishedAt": "2025-12-19T03:04:59.042546116Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hosts",
	        "LogPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18-json.log",
	        "Name": "/old-k8s-version-433330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-433330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-433330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	                "LowerDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-433330",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-433330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-433330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dccc35fac12f6f9c606670826d973be968de80e11b47147853405d102ecda025",
	            "SandboxKey": "/var/run/docker/netns/dccc35fac12f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-433330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ebf807015d65c8db1230e3a313a61194a5685b902dee458d727805bc340fe33d",
	                    "EndpointID": "a6443b6616b36367152fe2b3630db96df1ad95a1774c32a4f279e3a106c8f1e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:3f:cd:fb:94:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-433330",
	                        "ed00f1899233"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
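The inspect output above shows the kic container with Pid 338430, Paused=false, and the expected 127.0.0.1 port bindings (SSH on 33118, apiserver on 33121). For reference, the same fields can be pulled with a targeted inspect outside the test harness; the command below is illustrative only and uses the same Go-template paths the harness itself relies on:

	docker container inspect old-k8s-version-433330 \
	  --format 'running={{.State.Running}} paused={{.State.Paused}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
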
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330: exit status 2 (357.087904ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
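Here `--format={{.Host}}` only reports the host container state ("Running"); a non-zero exit from `minikube status` generally means at least one of the other checked components (apiserver, kubelet) is not reported as Running, which is expected for a paused cluster, hence the "may be ok" note. The unfiltered status shows the per-component breakdown, for example:

	out/minikube-linux-amd64 status -p old-k8s-version-433330
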
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25: (1.314133824s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-821749 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
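Once the chart install above returns, the kubernetes-dashboard release is in place and can be inspected from the node with the same kubeconfig. A hedged sketch of follow-up checks (standard helm subcommands, not part of the captured run above):

    $ sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm -n kubernetes-dashboard status kubernetes-dashboard
    $ sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm -n kubernetes-dashboard get values kubernetes-dashboard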
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
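Taken together, the sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch to the systemd cgroup manager, put conmon into the pod cgroup, and open unprivileged ports from 0 upward. A minimal sketch of the keys they leave behind (the [crio.image] and [crio.runtime] section headers are assumed from the stock drop-in; other stock keys are omitted):

    $ sudo cat /etc/crio/crio.conf.d/02-crio.conf
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]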
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
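The grep/echo pipeline above rewrites /etc/hosts in place so that host.minikube.internal resolves to the Docker network gateway from inside the node. After it runs, the file carries exactly one such entry:

    $ grep host.minikube.internal /etc/hosts
    192.168.94.1	host.minikube.internal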
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
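The staged config above is written a few lines below to /var/tmp/minikube/kubeadm.yaml.new (2224 bytes); on this restart path it is only compared against the copy already on the node, which is why kubeadm is never re-run here. The same comparison can be reproduced by hand, a sketch:

    $ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "no reconfiguration needed"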
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
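At this point the kubelet unit (/lib/systemd/system/kubelet.service) and the drop-in carrying the ExecStart override printed earlier (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) are both on disk, daemon-reload has picked them up, and the service has been started. A hedged sketch of how to confirm that on the node (standard systemd commands, not part of the captured run):

    $ systemctl cat kubelet        # merged unit text, including the 10-kubeadm.conf override
    $ systemctl is-active kubelet  # expected: active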
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
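The pattern repeated three times above is how minikube wires extra CAs into the system trust store: openssl x509 -hash prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlink created by the preceding ln -fs, and each sudo test -L line then verifies that link. A sketch for the first certificate (the hash matches the b5213941.0 link checked above):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0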
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
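The six openssl invocations above all use -checkend 86400, which asks whether each control-plane certificate would expire within the next 86400 seconds (24 hours); a zero exit status means the cert is still good for at least a day, which is why the restart path does not regenerate them here. A sketch of the same check run by hand:

    $ openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "still valid for 24h"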
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
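The four IDs above come from the crictl query two lines earlier. Dropping --quiet from the same command prints names, images, and states alongside the IDs, which is easier to read when checking such a list by hand:

    $ sudo crictl --timeout=10s ps -a --label io.kubernetes.pod.namespace=kube-system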
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.294882463Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2" id=b149fd9d-fd72-4e11-adb2-25e489e6bf82 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.296980103Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper" id=19f10fdd-5916-41c8-a26a-2ff94d29c5dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.297143775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.301522987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.302174856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.312945079Z" level=info msg="Created container 1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4: kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn/proxy" id=8f1aef5d-9910-4677-95e2-3ddd26dbad0a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.31363451Z" level=info msg="Starting container: 1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4" id=8be5d998-255e-4a4b-8020-90d42f9fb352 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.316425544Z" level=info msg="Started container" PID=1962 containerID=1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4 description=kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn/proxy id=8be5d998-255e-4a4b-8020-90d42f9fb352 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b4016d036c099501205c1263d738aec355ca9ba0985ac0de1a6326f1ba60f4f
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.32575797Z" level=info msg="Created container 9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper" id=19f10fdd-5916-41c8-a26a-2ff94d29c5dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.326784172Z" level=info msg="Starting container: 9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2" id=c58dd769-fa87-40be-a2b3-85b8605bc579 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.329520518Z" level=info msg="Started container" PID=1967 containerID=9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2 description=kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper id=c58dd769-fa87-40be-a2b3-85b8605bc579 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5b7f0901c4eba07cb72103c3ef6c2da1dd3e8c1ae0cbe501ab5646ede4e16ae
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.151028864Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cbd04026-4973-4fb2-a2f5-e1a0bcef1d04 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.152401329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5d524859-0cd0-482d-8890-c3a0b5bfcadf name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.153497878Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=476303b0-0df5-44b1-9467-56549e89f9fa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.153634163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.15821817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.158364577Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fc5e20521cad9e4c78b487769b301845071ca906b069eb3bccc1e9a717925eaf/merged/etc/passwd: no such file or directory"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.1583869Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fc5e20521cad9e4c78b487769b301845071ca906b069eb3bccc1e9a717925eaf/merged/etc/group: no such file or directory"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.158596016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.189477862Z" level=info msg="Created container b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622: kube-system/storage-provisioner/storage-provisioner" id=476303b0-0df5-44b1-9467-56549e89f9fa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.190263305Z" level=info msg="Starting container: b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622" id=094675cc-8230-4bbb-9611-af75aa9fbdb9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.192298533Z" level=info msg="Started container" PID=3386 containerID=b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622 description=kube-system/storage-provisioner/storage-provisioner id=094675cc-8230-4bbb-9611-af75aa9fbdb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0546164e8f444b2265480d306eeac5a7944c866d22f7a7daa5d4a8a97d59bd1
	Dec 19 03:10:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:10:06.979473429Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=88b602ee-9bb9-4765-ba4b-8f37a46dfeb9 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:15:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:15:06.983672919Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=3ec44638-8e03-4c18-8174-4cc031367aa5 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:20:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:20:06.987998685Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=691148af-39d8-427d-99b8-393bcb276786 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	b58c35740f2bd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   c0546164e8f44       storage-provisioner                                     kube-system
	9757437ad1c1d       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   b5b7f0901c4eb       kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2   kubernetes-dashboard
	1a79f7aa9ddca       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   5b4016d036c09       kubernetes-dashboard-kong-f487b85cd-7vrxn               kubernetes-dashboard
	43a7239d34381       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   5b4016d036c09       kubernetes-dashboard-kong-f487b85cd-7vrxn               kubernetes-dashboard
	c787e566a1357       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   1b21bd00ecbe5       kubernetes-dashboard-auth-96f55cbc9-q6w55               kubernetes-dashboard
	572a9a98a5b17       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   2598675df2023       kubernetes-dashboard-api-6c85dd6d79-gplb7               kubernetes-dashboard
	162ae6553f9ec       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   1832855b57889       kubernetes-dashboard-web-858bd7466-nt8k8                kubernetes-dashboard
	8040658b9f3ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                           18 minutes ago      Running             coredns                                0                   c68d596bc4c32       coredns-5dd5756b68-vp79f                                kube-system
	e0cd612dc1ee9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   a960ed231cfff       busybox                                                 default
	9243551aa2fc1       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   83c7dbba43d07       kindnet-hm2sz                                           kube-system
	9a529209e91c7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                                           18 minutes ago      Running             kube-proxy                             0                   2bfa6386c24f2       kube-proxy-wdrk8                                        kube-system
	4a2a86182d6e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   c0546164e8f44       storage-provisioner                                     kube-system
	ba54120ef227f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                           18 minutes ago      Running             etcd                                   0                   e4fbd268e41d9       etcd-old-k8s-version-433330                             kube-system
	dca7ec4a11ad9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                                           18 minutes ago      Running             kube-controller-manager                0                   2ebbf830bac83       kube-controller-manager-old-k8s-version-433330          kube-system
	6764bc2ee8b6d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                                           18 minutes ago      Running             kube-scheduler                         0                   b8ce7eb1e0991       kube-scheduler-old-k8s-version-433330                   kube-system
	e80d5d62bfdcc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                                           18 minutes ago      Running             kube-apiserver                         0                   5a193f007e64f       kube-apiserver-old-k8s-version-433330                   kube-system
	
	
	==> coredns [8040658b9f3eabbbb5ba47f09aef99df774aaa0bfb4e998b1fb75e4a9d80669f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41940 - 34117 "HINFO IN 2692397503380385834.233192437307976356. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.044493269s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-433330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-433330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-433330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-433330
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:23:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-433330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                51a7519b-85cf-4ec7-8319-8a51b3632490
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-vp79f                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-433330                              100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-hm2sz                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-433330                    250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-433330           200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-wdrk8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-433330                    100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-6c85dd6d79-gplb7                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-96f55cbc9-q6w55                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-7vrxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-nt8k8                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-433330 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e] <==
	{"level":"info","ts":"2025-12-19T03:05:23.716798Z","caller":"traceutil/trace.go:171","msg":"trace[1286006389] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"111.325708ms","start":"2025-12-19T03:05:23.605446Z","end":"2025-12-19T03:05:23.716772Z","steps":["trace[1286006389] 'process raft request'  (duration: 111.154063ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.762567Z","caller":"traceutil/trace.go:171","msg":"trace[1170228424] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"156.711191ms","start":"2025-12-19T03:05:23.605773Z","end":"2025-12-19T03:05:23.762484Z","steps":["trace[1170228424] 'process raft request'  (duration: 156.477047ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.76258Z","caller":"traceutil/trace.go:171","msg":"trace[176958629] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"155.653437ms","start":"2025-12-19T03:05:23.606903Z","end":"2025-12-19T03:05:23.762556Z","steps":["trace[176958629] 'process raft request'  (duration: 155.495851ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.762606Z","caller":"traceutil/trace.go:171","msg":"trace[11901299] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"154.359966ms","start":"2025-12-19T03:05:23.608234Z","end":"2025-12-19T03:05:23.762594Z","steps":["trace[11901299] 'process raft request'  (duration: 154.193134ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.762855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.14879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.76292Z","caller":"traceutil/trace.go:171","msg":"trace[491680101] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:641; }","duration":"100.257204ms","start":"2025-12-19T03:05:23.662645Z","end":"2025-12-19T03:05:23.762902Z","steps":["trace[491680101] 'agreement among raft nodes before linearized reading'  (duration: 100.093535ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.763103Z","caller":"traceutil/trace.go:171","msg":"trace[1326394039] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"156.274686ms","start":"2025-12-19T03:05:23.606816Z","end":"2025-12-19T03:05:23.763091Z","steps":["trace[1326394039] 'process raft request'  (duration: 155.543051ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.926373Z","caller":"traceutil/trace.go:171","msg":"trace[923941046] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:668; }","duration":"163.896791ms","start":"2025-12-19T03:05:23.762458Z","end":"2025-12-19T03:05:23.926354Z","steps":["trace[923941046] 'read index received'  (duration: 90.331723ms)","trace[923941046] 'applied index is now lower than readState.Index'  (duration: 73.564544ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:23.926443Z","caller":"traceutil/trace.go:171","msg":"trace[1947040731] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"205.888361ms","start":"2025-12-19T03:05:23.720531Z","end":"2025-12-19T03:05:23.926419Z","steps":["trace[1947040731] 'process raft request'  (duration: 132.202751ms)","trace[1947040731] 'compare'  (duration: 73.474481ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:23.92647Z","caller":"traceutil/trace.go:171","msg":"trace[719632072] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"203.800384ms","start":"2025-12-19T03:05:23.722655Z","end":"2025-12-19T03:05:23.926455Z","steps":["trace[719632072] 'process raft request'  (duration: 203.652153ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.926492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.716096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kubernetes-dashboard/\" range_end:\"/registry/limitranges/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.926529Z","caller":"traceutil/trace.go:171","msg":"trace[291568890] range","detail":"{range_begin:/registry/limitranges/kubernetes-dashboard/; range_end:/registry/limitranges/kubernetes-dashboard0; response_count:0; response_revision:643; }","duration":"204.766821ms","start":"2025-12-19T03:05:23.721752Z","end":"2025-12-19T03:05:23.926519Z","steps":["trace[291568890] 'agreement among raft nodes before linearized reading'  (duration: 204.695193ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.950771Z","caller":"traceutil/trace.go:171","msg":"trace[910369966] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"179.377478ms","start":"2025-12-19T03:05:23.77138Z","end":"2025-12-19T03:05:23.950757Z","steps":["trace[910369966] 'process raft request'  (duration: 179.260563ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.950784Z","caller":"traceutil/trace.go:171","msg":"trace[4968190] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"179.447416ms","start":"2025-12-19T03:05:23.771287Z","end":"2025-12-19T03:05:23.950734Z","steps":["trace[4968190] 'process raft request'  (duration: 179.24612ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.951094Z","caller":"traceutil/trace.go:171","msg":"trace[108964002] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"179.505335ms","start":"2025-12-19T03:05:23.771577Z","end":"2025-12-19T03:05:23.951082Z","steps":["trace[108964002] 'process raft request'  (duration: 179.104746ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.951137Z","caller":"traceutil/trace.go:171","msg":"trace[652577346] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"176.993248ms","start":"2025-12-19T03:05:23.774131Z","end":"2025-12-19T03:05:23.951124Z","steps":["trace[652577346] 'process raft request'  (duration: 176.75032ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.951195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.30836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.951226Z","caller":"traceutil/trace.go:171","msg":"trace[1368537699] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:647; }","duration":"183.528611ms","start":"2025-12-19T03:05:23.767688Z","end":"2025-12-19T03:05:23.951216Z","steps":["trace[1368537699] 'agreement among raft nodes before linearized reading'  (duration: 183.469758ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:34.236332Z","caller":"traceutil/trace.go:171","msg":"trace[532828479] transaction","detail":"{read_only:false; response_revision:733; number_of_response:1; }","duration":"124.186623ms","start":"2025-12-19T03:05:34.112115Z","end":"2025-12-19T03:05:34.236302Z","steps":["trace[532828479] 'process raft request'  (duration: 124.016196ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:15:09.13417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":975}
	{"level":"info","ts":"2025-12-19T03:15:09.136009Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":975,"took":"1.560442ms","hash":2911588948}
	{"level":"info","ts":"2025-12-19T03:15:09.13606Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2911588948,"revision":975,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:09.140625Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1214}
	{"level":"info","ts":"2025-12-19T03:20:09.141731Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1214,"took":"808.598µs","hash":1219419124}
	{"level":"info","ts":"2025-12-19T03:20:09.141763Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1219419124,"revision":1214,"compact-revision":975}
	
	
	==> kernel <==
	 03:23:54 up  1:06,  0 user,  load average: 0.42, 0.52, 1.13
	Linux old-k8s-version-433330 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9243551aa2fc190ba88910a889833487b7b7643a56fe7be4cf6f3b57fe4c6fb8] <==
	I1219 03:21:51.948941       1 main.go:301] handling current node
	I1219 03:22:01.952938       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:01.952967       1 main.go:301] handling current node
	I1219 03:22:11.944344       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:11.944378       1 main.go:301] handling current node
	I1219 03:22:21.943891       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:21.943924       1 main.go:301] handling current node
	I1219 03:22:31.951661       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:31.951736       1 main.go:301] handling current node
	I1219 03:22:41.944791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:41.944829       1 main.go:301] handling current node
	I1219 03:22:51.946611       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:51.946640       1 main.go:301] handling current node
	I1219 03:23:01.952537       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:01.952570       1 main.go:301] handling current node
	I1219 03:23:11.946080       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:11.946118       1 main.go:301] handling current node
	I1219 03:23:21.947459       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:21.947498       1 main.go:301] handling current node
	I1219 03:23:31.952163       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:31.952196       1 main.go:301] handling current node
	I1219 03:23:41.944118       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:41.944176       1 main.go:301] handling current node
	I1219 03:23:51.949585       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:51.949617       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100] <==
	I1219 03:10:10.583004       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:10:10.583125       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583190       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583344       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583413       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583485       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583543       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583600       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:10:10.583658       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583735       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584188       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:15:10.584289       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584352       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584448       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584519       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584743       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584830       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584915       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584995       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:15:10.585051       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.585117       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:20:10.584921       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:20:10.585580       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:20:10.585664       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:20:10.585744       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	
	
	==> kube-controller-manager [dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386] <==
	I1219 03:05:29.137946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79" duration="10.500982ms"
	I1219 03:05:29.138772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79" duration="222.04µs"
	I1219 03:05:30.139560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9" duration="10.738008ms"
	I1219 03:05:30.141370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9" duration="230.761µs"
	I1219 03:05:35.145518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="124.735µs"
	I1219 03:05:36.153341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="7.771826ms"
	I1219 03:05:36.153487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="81.765µs"
	I1219 03:05:36.161499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="142.354µs"
	I1219 03:05:44.124877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.783071ms"
	I1219 03:05:44.124969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.031µs"
	I1219 03:05:44.322554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="10.292955ms"
	I1219 03:05:44.322813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="137.021µs"
	I1219 03:05:53.457987       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongplugins.configuration.konghq.com"
	I1219 03:05:53.458044       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 03:05:53.458064       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="tcpingresses.configuration.konghq.com"
	I1219 03:05:53.458080       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 03:05:53.458106       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongconsumers.configuration.konghq.com"
	I1219 03:05:53.458129       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongingresses.configuration.konghq.com"
	I1219 03:05:53.458159       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="udpingresses.configuration.konghq.com"
	I1219 03:05:53.458185       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongconsumergroups.configuration.konghq.com"
	I1219 03:05:53.458213       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongcustomentities.configuration.konghq.com"
	I1219 03:05:53.458314       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1219 03:05:53.658752       1 shared_informer.go:318] Caches are synced for resource quota
	I1219 03:05:53.873190       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1219 03:05:53.973659       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [9a529209e91c73dd4753533a0684e92d16e448ef5aed74f450f82867ec17304c] <==
	I1219 03:05:11.436432       1 server_others.go:69] "Using iptables proxy"
	I1219 03:05:11.452009       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1219 03:05:11.479225       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:11.482560       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:05:11.482604       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:05:11.482625       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:05:11.482679       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:05:11.483072       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:05:11.483108       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:11.485106       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:05:11.485126       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:05:11.485951       1 config.go:315] "Starting node config controller"
	I1219 03:05:11.486004       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:05:11.485951       1 config.go:188] "Starting service config controller"
	I1219 03:05:11.486179       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:05:11.585764       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:05:11.587020       1 shared_informer.go:318] Caches are synced for node config
	I1219 03:05:11.587059       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b] <==
	I1219 03:05:08.072216       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:05:10.585445       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:05:10.585508       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:05:10.585524       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:05:10.585535       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:05:10.628537       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:05:10.628629       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:10.631418       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:05:10.631571       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:10.633792       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:05:10.631594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:05:10.734781       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062320     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c53e26af-d9fd-4efc-9354-3b3e505b50f1-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2\" (UID: \"c53e26af-d9fd-4efc-9354-3b3e505b50f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062411     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwqf\" (UniqueName: \"kubernetes.io/projected/583637fe-b99f-4b55-8173-e40ef125a4da-kube-api-access-lrwqf\") pod \"kubernetes-dashboard-auth-96f55cbc9-q6w55\" (UID: \"583637fe-b99f-4b55-8173-e40ef125a4da\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062450     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062475     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062493     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/970184f3-748e-4083-93e1-27215e7d3544-tmp-volume\") pod \"kubernetes-dashboard-api-6c85dd6d79-gplb7\" (UID: \"970184f3-748e-4083-93e1-27215e7d3544\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062547     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/583637fe-b99f-4b55-8173-e40ef125a4da-tmp-volume\") pod \"kubernetes-dashboard-auth-96f55cbc9-q6w55\" (UID: \"583637fe-b99f-4b55-8173-e40ef125a4da\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062611     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:27 old-k8s-version-433330 kubelet[727]: I1219 03:05:27.257035     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:27 old-k8s-version-433330 kubelet[727]: I1219 03:05:27.257133     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.110504     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-858bd7466-nt8k8" podStartSLOduration=2.1424808889999998 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.288808133 +0000 UTC m=+17.406061880" lastFinishedPulling="2025-12-19 03:05:27.256749566 +0000 UTC m=+20.374003326" observedRunningTime="2025-12-19 03:05:28.109420313 +0000 UTC m=+21.226674073" watchObservedRunningTime="2025-12-19 03:05:28.110422335 +0000 UTC m=+21.227676096"
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.215638     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.215739     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:29 old-k8s-version-433330 kubelet[727]: I1219 03:05:29.086317     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:29 old-k8s-version-433330 kubelet[727]: I1219 03:05:29.086398     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:30 old-k8s-version-433330 kubelet[727]: I1219 03:05:30.129411     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7" podStartSLOduration=3.221513351 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.307578658 +0000 UTC m=+17.424832408" lastFinishedPulling="2025-12-19 03:05:28.215417358 +0000 UTC m=+21.332671100" observedRunningTime="2025-12-19 03:05:29.130889061 +0000 UTC m=+22.248142823" watchObservedRunningTime="2025-12-19 03:05:30.129352043 +0000 UTC m=+23.246605805"
	Dec 19 03:05:30 old-k8s-version-433330 kubelet[727]: I1219 03:05:30.130193     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55" podStartSLOduration=2.356310917 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.31224463 +0000 UTC m=+17.429498372" lastFinishedPulling="2025-12-19 03:05:29.086067921 +0000 UTC m=+22.203321673" observedRunningTime="2025-12-19 03:05:30.128668409 +0000 UTC m=+23.245922169" watchObservedRunningTime="2025-12-19 03:05:30.130134218 +0000 UTC m=+23.247387978"
	Dec 19 03:05:35 old-k8s-version-433330 kubelet[727]: I1219 03:05:35.294232     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:35 old-k8s-version-433330 kubelet[727]: I1219 03:05:35.294310     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:36 old-k8s-version-433330 kubelet[727]: I1219 03:05:36.145317     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2" podStartSLOduration=2.170852672 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.319586522 +0000 UTC m=+17.436840275" lastFinishedPulling="2025-12-19 03:05:35.293995871 +0000 UTC m=+28.411249625" observedRunningTime="2025-12-19 03:05:36.145033222 +0000 UTC m=+29.262286982" watchObservedRunningTime="2025-12-19 03:05:36.145262022 +0000 UTC m=+29.262515784"
	Dec 19 03:05:36 old-k8s-version-433330 kubelet[727]: I1219 03:05:36.161013     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn" podStartSLOduration=2.986982841 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.31920326 +0000 UTC m=+17.436457054" lastFinishedPulling="2025-12-19 03:05:34.493165004 +0000 UTC m=+27.610418746" observedRunningTime="2025-12-19 03:05:36.16087964 +0000 UTC m=+29.278133404" watchObservedRunningTime="2025-12-19 03:05:36.160944533 +0000 UTC m=+29.278198294"
	Dec 19 03:05:42 old-k8s-version-433330 kubelet[727]: I1219 03:05:42.150477     727 scope.go:117] "RemoveContainer" containerID="4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d"
	Dec 19 03:23:51 old-k8s-version-433330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:23:51 old-k8s-version-433330 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:23:51 old-k8s-version-433330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:23:51 old-k8s-version-433330 systemd[1]: kubelet.service: Consumed 21.277s CPU time.
	
	
	==> kubernetes-dashboard [162ae6553f9ecd77ec53fa67c114215b76b2baa657cb81e65ce4554f0273417d] <==
	I1219 03:05:27.332655       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:27.393367       1 init.go:48] Using in-cluster config
	I1219 03:05:27.393589       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [572a9a98a5b172d9fdd8fb35ee68c37988f906c342e70e89e9bc008440b9c471] <==
	I1219 03:05:28.320430       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:05:28.320512       1 init.go:49] Using in-cluster config
	I1219 03:05:28.320694       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:05:28.320747       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:28.320756       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:28.320762       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:28.327903       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:05:28.327931       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:28.332767       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:28.336184       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:05:58.341672       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2] <==
	10.244.0.1 - - [19/Dec/2025:03:21:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:54 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:54 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	E1219 03:21:35.368770       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:22:35.366075       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:35.366592       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [c787e566a13574b826852efdc66cad3575fec6831fdc3a03d85531ffb5a730c9] <==
	I1219 03:05:29.223480       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:05:29.223546       1 init.go:49] Using in-cluster config
	I1219 03:05:29.223660       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d] <==
	I1219 03:05:11.393839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:41.397217       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622] <==
	I1219 03:05:42.205301       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:05:42.214869       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:05:42.214917       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:05:59.616530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:05:59.616620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eca1d2cd-fec8-4561-9433-a93751f8f3f7", APIVersion:"v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3 became leader
	I1219 03:05:59.616726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3!
	I1219 03:05:59.716964       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330: exit status 2 (336.878055ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-433330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-433330
helpers_test.go:244: (dbg) docker inspect old-k8s-version-433330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	        "Created": "2025-12-19T03:03:42.290394762Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 338430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:00.142567023Z",
	            "FinishedAt": "2025-12-19T03:04:59.042546116Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/hosts",
	        "LogPath": "/var/lib/docker/containers/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18/ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18-json.log",
	        "Name": "/old-k8s-version-433330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-433330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-433330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed00f1899233b91ac778b19a643e21a9dc25721c167c9d1bc5decc178917ee18",
	                "LowerDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6032c39fb2e99c6be5b0226eff9b2f93ff205fafe5352aa25ebce819f3079e50/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-433330",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-433330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-433330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-433330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dccc35fac12f6f9c606670826d973be968de80e11b47147853405d102ecda025",
	            "SandboxKey": "/var/run/docker/netns/dccc35fac12f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-433330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ebf807015d65c8db1230e3a313a61194a5685b902dee458d727805bc340fe33d",
	                    "EndpointID": "a6443b6616b36367152fe2b3630db96df1ad95a1774c32a4f279e3a106c8f1e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:3f:cd:fb:94:67",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-433330",
	                        "ed00f1899233"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330: exit status 2 (334.544344ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-433330 logs -n 25: (1.277277225s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                    │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
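The 500 responses above come from the apiserver's /healthz endpoint while its post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still completing. A hedged sketch of the kind of polling loop api_server.go performs follows; the URL, retry cadence, and TLS handling are illustrative assumptions, not minikube's code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver until /healthz returns 200, printing the
// per-check "[+]/[-]" breakdown while it is still returning 500.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A real client would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}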
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
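The two 60s waits above (for the CRI socket and for crictl version) amount to simple polling. A minimal, hedged sketch of the socket wait follows; the path and timeout come from the log, the polling interval and the rest are illustrative assumptions.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls with os.Stat until the CRI socket exists or the
// timeout expires, roughly what "Will wait 60s for socket path" describes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}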
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
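The kapi.go lines above poll for the dashboard web pod by label selector until it leaves Pending. A hedged client-go sketch of that style of wait follows; the kubeconfig path and timings are illustrative assumptions, not minikube's kapi.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning lists pods matching the selector until at least one is Running.
func waitForPodRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no Running pod for %q in %q within %s", selector, ns, timeout)
}

func main() {
	// Kubeconfig path is an assumption for the sketch; minikube builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitForPodRunning(cs, "kubernetes-dashboard",
		"app.kubernetes.io/name=kubernetes-dashboard-web", 6*time.Minute)
	fmt.Println(err)
}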
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
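The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. A hedged Go equivalent for a single PEM file follows; the file path is an illustrative assumption.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires before now+window, i.e. what `openssl x509 -checkend` tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}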
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
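
	The `docker container inspect -f` calls above read which host port Docker published for the container's 22/tcp; that port (33133 here) is what the SSH clients are opened against on 127.0.0.1. A small sketch reproducing the same Go-template query via the docker CLI (the container name is taken from this log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is published for the
// container's 22/tcp, using the same Go template seen in the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("default-k8s-diff-port-717222")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port)
}
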
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
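
	The health check above simply polls https://192.168.94.2:8444/healthz until it answers 200 with body "ok". A minimal sketch of such a probe; skipping TLS verification is an assumption made for brevity, a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz polls an apiserver /healthz endpoint until it answers 200 or
// the deadline passes.
func healthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: trust the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := healthz("https://192.168.94.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
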
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
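
	Both startups end with the pod_ready wait: every kube-system pod carrying one of the listed labels is polled until its Ready condition is True (or the pod is gone); coredns takes roughly 23-26s here while the static control-plane pods are already Ready. A compact client-go sketch of that readiness test; the namespace and label selectors come from the log, but the polling loop is illustrative and, unlike the real check, does not handle the pod-deleted case:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same label selectors as in the log, one List per selector.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			allReady := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
				}
			}
			if allReady {
				fmt.Println(sel, "ready")
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}
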
	
	
	==> CRI-O <==
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.294882463Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2" id=b149fd9d-fd72-4e11-adb2-25e489e6bf82 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.296980103Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper" id=19f10fdd-5916-41c8-a26a-2ff94d29c5dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.297143775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.301522987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.302174856Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.312945079Z" level=info msg="Created container 1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4: kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn/proxy" id=8f1aef5d-9910-4677-95e2-3ddd26dbad0a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.31363451Z" level=info msg="Starting container: 1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4" id=8be5d998-255e-4a4b-8020-90d42f9fb352 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.316425544Z" level=info msg="Started container" PID=1962 containerID=1a79f7aa9ddca8538f5c2e6b027ef80376da81a39357bcf46549ad8c98b71cf4 description=kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn/proxy id=8be5d998-255e-4a4b-8020-90d42f9fb352 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5b4016d036c099501205c1263d738aec355ca9ba0985ac0de1a6326f1ba60f4f
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.32575797Z" level=info msg="Created container 9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2: kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper" id=19f10fdd-5916-41c8-a26a-2ff94d29c5dc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.326784172Z" level=info msg="Starting container: 9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2" id=c58dd769-fa87-40be-a2b3-85b8605bc579 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:35 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:35.329520518Z" level=info msg="Started container" PID=1967 containerID=9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2 description=kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2/kubernetes-dashboard-metrics-scraper id=c58dd769-fa87-40be-a2b3-85b8605bc579 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5b7f0901c4eba07cb72103c3ef6c2da1dd3e8c1ae0cbe501ab5646ede4e16ae
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.151028864Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cbd04026-4973-4fb2-a2f5-e1a0bcef1d04 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.152401329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5d524859-0cd0-482d-8890-c3a0b5bfcadf name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.153497878Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=476303b0-0df5-44b1-9467-56549e89f9fa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.153634163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.15821817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.158364577Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fc5e20521cad9e4c78b487769b301845071ca906b069eb3bccc1e9a717925eaf/merged/etc/passwd: no such file or directory"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.1583869Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fc5e20521cad9e4c78b487769b301845071ca906b069eb3bccc1e9a717925eaf/merged/etc/group: no such file or directory"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.158596016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.189477862Z" level=info msg="Created container b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622: kube-system/storage-provisioner/storage-provisioner" id=476303b0-0df5-44b1-9467-56549e89f9fa name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.190263305Z" level=info msg="Starting container: b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622" id=094675cc-8230-4bbb-9611-af75aa9fbdb9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:42 old-k8s-version-433330 crio[565]: time="2025-12-19T03:05:42.192298533Z" level=info msg="Started container" PID=3386 containerID=b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622 description=kube-system/storage-provisioner/storage-provisioner id=094675cc-8230-4bbb-9611-af75aa9fbdb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0546164e8f444b2265480d306eeac5a7944c866d22f7a7daa5d4a8a97d59bd1
	Dec 19 03:10:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:10:06.979473429Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=88b602ee-9bb9-4765-ba4b-8f37a46dfeb9 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:15:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:15:06.983672919Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=3ec44638-8e03-4c18-8174-4cc031367aa5 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:20:06 old-k8s-version-433330 crio[565]: time="2025-12-19T03:20:06.987998685Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=691148af-39d8-427d-99b8-393bcb276786 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	b58c35740f2bd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   c0546164e8f44       storage-provisioner                                     kube-system
	9757437ad1c1d       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   b5b7f0901c4eb       kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2   kubernetes-dashboard
	1a79f7aa9ddca       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   5b4016d036c09       kubernetes-dashboard-kong-f487b85cd-7vrxn               kubernetes-dashboard
	43a7239d34381       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   5b4016d036c09       kubernetes-dashboard-kong-f487b85cd-7vrxn               kubernetes-dashboard
	c787e566a1357       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   1b21bd00ecbe5       kubernetes-dashboard-auth-96f55cbc9-q6w55               kubernetes-dashboard
	572a9a98a5b17       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   2598675df2023       kubernetes-dashboard-api-6c85dd6d79-gplb7               kubernetes-dashboard
	162ae6553f9ec       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   1832855b57889       kubernetes-dashboard-web-858bd7466-nt8k8                kubernetes-dashboard
	8040658b9f3ea       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                           18 minutes ago      Running             coredns                                0                   c68d596bc4c32       coredns-5dd5756b68-vp79f                                kube-system
	e0cd612dc1ee9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   a960ed231cfff       busybox                                                 default
	9243551aa2fc1       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   83c7dbba43d07       kindnet-hm2sz                                           kube-system
	9a529209e91c7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                                           18 minutes ago      Running             kube-proxy                             0                   2bfa6386c24f2       kube-proxy-wdrk8                                        kube-system
	4a2a86182d6e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   c0546164e8f44       storage-provisioner                                     kube-system
	ba54120ef227f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                           18 minutes ago      Running             etcd                                   0                   e4fbd268e41d9       etcd-old-k8s-version-433330                             kube-system
	dca7ec4a11ad9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                                           18 minutes ago      Running             kube-controller-manager                0                   2ebbf830bac83       kube-controller-manager-old-k8s-version-433330          kube-system
	6764bc2ee8b6d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                                           18 minutes ago      Running             kube-scheduler                         0                   b8ce7eb1e0991       kube-scheduler-old-k8s-version-433330                   kube-system
	e80d5d62bfdcc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                                           18 minutes ago      Running             kube-apiserver                         0                   5a193f007e64f       kube-apiserver-old-k8s-version-433330                   kube-system
	
	
	==> coredns [8040658b9f3eabbbb5ba47f09aef99df774aaa0bfb4e998b1fb75e4a9d80669f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41940 - 34117 "HINFO IN 2692397503380385834.233192437307976356. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.044493269s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-433330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-433330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-433330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:03:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-433330
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:23:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:03:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:21:00 +0000   Fri, 19 Dec 2025 03:04:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-433330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                51a7519b-85cf-4ec7-8319-8a51b3632490
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-vp79f                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-433330                              100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-hm2sz                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-old-k8s-version-433330                    250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-433330           200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-wdrk8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-433330                    100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-6c85dd6d79-gplb7                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-96f55cbc9-q6w55                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-7vrxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-nt8k8                 100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x8 over 20m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-433330 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node old-k8s-version-433330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-433330 event: Registered Node old-k8s-version-433330 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [ba54120ef227f793b4872b17bf830934ba52ead424688a987fd2abd98fa2cc0e] <==
	{"level":"info","ts":"2025-12-19T03:05:23.716798Z","caller":"traceutil/trace.go:171","msg":"trace[1286006389] transaction","detail":"{read_only:false; response_revision:637; number_of_response:1; }","duration":"111.325708ms","start":"2025-12-19T03:05:23.605446Z","end":"2025-12-19T03:05:23.716772Z","steps":["trace[1286006389] 'process raft request'  (duration: 111.154063ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.762567Z","caller":"traceutil/trace.go:171","msg":"trace[1170228424] transaction","detail":"{read_only:false; response_revision:638; number_of_response:1; }","duration":"156.711191ms","start":"2025-12-19T03:05:23.605773Z","end":"2025-12-19T03:05:23.762484Z","steps":["trace[1170228424] 'process raft request'  (duration: 156.477047ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.76258Z","caller":"traceutil/trace.go:171","msg":"trace[176958629] transaction","detail":"{read_only:false; response_revision:640; number_of_response:1; }","duration":"155.653437ms","start":"2025-12-19T03:05:23.606903Z","end":"2025-12-19T03:05:23.762556Z","steps":["trace[176958629] 'process raft request'  (duration: 155.495851ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.762606Z","caller":"traceutil/trace.go:171","msg":"trace[11901299] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"154.359966ms","start":"2025-12-19T03:05:23.608234Z","end":"2025-12-19T03:05:23.762594Z","steps":["trace[11901299] 'process raft request'  (duration: 154.193134ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.762855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.14879ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.76292Z","caller":"traceutil/trace.go:171","msg":"trace[491680101] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:641; }","duration":"100.257204ms","start":"2025-12-19T03:05:23.662645Z","end":"2025-12-19T03:05:23.762902Z","steps":["trace[491680101] 'agreement among raft nodes before linearized reading'  (duration: 100.093535ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.763103Z","caller":"traceutil/trace.go:171","msg":"trace[1326394039] transaction","detail":"{read_only:false; response_revision:639; number_of_response:1; }","duration":"156.274686ms","start":"2025-12-19T03:05:23.606816Z","end":"2025-12-19T03:05:23.763091Z","steps":["trace[1326394039] 'process raft request'  (duration: 155.543051ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.926373Z","caller":"traceutil/trace.go:171","msg":"trace[923941046] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:668; }","duration":"163.896791ms","start":"2025-12-19T03:05:23.762458Z","end":"2025-12-19T03:05:23.926354Z","steps":["trace[923941046] 'read index received'  (duration: 90.331723ms)","trace[923941046] 'applied index is now lower than readState.Index'  (duration: 73.564544ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:23.926443Z","caller":"traceutil/trace.go:171","msg":"trace[1947040731] transaction","detail":"{read_only:false; response_revision:642; number_of_response:1; }","duration":"205.888361ms","start":"2025-12-19T03:05:23.720531Z","end":"2025-12-19T03:05:23.926419Z","steps":["trace[1947040731] 'process raft request'  (duration: 132.202751ms)","trace[1947040731] 'compare'  (duration: 73.474481ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:05:23.92647Z","caller":"traceutil/trace.go:171","msg":"trace[719632072] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"203.800384ms","start":"2025-12-19T03:05:23.722655Z","end":"2025-12-19T03:05:23.926455Z","steps":["trace[719632072] 'process raft request'  (duration: 203.652153ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.926492Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.716096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kubernetes-dashboard/\" range_end:\"/registry/limitranges/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.926529Z","caller":"traceutil/trace.go:171","msg":"trace[291568890] range","detail":"{range_begin:/registry/limitranges/kubernetes-dashboard/; range_end:/registry/limitranges/kubernetes-dashboard0; response_count:0; response_revision:643; }","duration":"204.766821ms","start":"2025-12-19T03:05:23.721752Z","end":"2025-12-19T03:05:23.926519Z","steps":["trace[291568890] 'agreement among raft nodes before linearized reading'  (duration: 204.695193ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.950771Z","caller":"traceutil/trace.go:171","msg":"trace[910369966] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"179.377478ms","start":"2025-12-19T03:05:23.77138Z","end":"2025-12-19T03:05:23.950757Z","steps":["trace[910369966] 'process raft request'  (duration: 179.260563ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.950784Z","caller":"traceutil/trace.go:171","msg":"trace[4968190] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"179.447416ms","start":"2025-12-19T03:05:23.771287Z","end":"2025-12-19T03:05:23.950734Z","steps":["trace[4968190] 'process raft request'  (duration: 179.24612ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.951094Z","caller":"traceutil/trace.go:171","msg":"trace[108964002] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"179.505335ms","start":"2025-12-19T03:05:23.771577Z","end":"2025-12-19T03:05:23.951082Z","steps":["trace[108964002] 'process raft request'  (duration: 179.104746ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:23.951137Z","caller":"traceutil/trace.go:171","msg":"trace[652577346] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"176.993248ms","start":"2025-12-19T03:05:23.774131Z","end":"2025-12-19T03:05:23.951124Z","steps":["trace[652577346] 'process raft request'  (duration: 176.75032ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:05:23.951195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.30836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:05:23.951226Z","caller":"traceutil/trace.go:171","msg":"trace[1368537699] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:647; }","duration":"183.528611ms","start":"2025-12-19T03:05:23.767688Z","end":"2025-12-19T03:05:23.951216Z","steps":["trace[1368537699] 'agreement among raft nodes before linearized reading'  (duration: 183.469758ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:05:34.236332Z","caller":"traceutil/trace.go:171","msg":"trace[532828479] transaction","detail":"{read_only:false; response_revision:733; number_of_response:1; }","duration":"124.186623ms","start":"2025-12-19T03:05:34.112115Z","end":"2025-12-19T03:05:34.236302Z","steps":["trace[532828479] 'process raft request'  (duration: 124.016196ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:15:09.13417Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":975}
	{"level":"info","ts":"2025-12-19T03:15:09.136009Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":975,"took":"1.560442ms","hash":2911588948}
	{"level":"info","ts":"2025-12-19T03:15:09.13606Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2911588948,"revision":975,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:09.140625Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1214}
	{"level":"info","ts":"2025-12-19T03:20:09.141731Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1214,"took":"808.598µs","hash":1219419124}
	{"level":"info","ts":"2025-12-19T03:20:09.141763Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1219419124,"revision":1214,"compact-revision":975}
	
	
	==> kernel <==
	 03:23:56 up  1:06,  0 user,  load average: 0.55, 0.54, 1.14
	Linux old-k8s-version-433330 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9243551aa2fc190ba88910a889833487b7b7643a56fe7be4cf6f3b57fe4c6fb8] <==
	I1219 03:21:51.948941       1 main.go:301] handling current node
	I1219 03:22:01.952938       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:01.952967       1 main.go:301] handling current node
	I1219 03:22:11.944344       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:11.944378       1 main.go:301] handling current node
	I1219 03:22:21.943891       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:21.943924       1 main.go:301] handling current node
	I1219 03:22:31.951661       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:31.951736       1 main.go:301] handling current node
	I1219 03:22:41.944791       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:41.944829       1 main.go:301] handling current node
	I1219 03:22:51.946611       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:22:51.946640       1 main.go:301] handling current node
	I1219 03:23:01.952537       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:01.952570       1 main.go:301] handling current node
	I1219 03:23:11.946080       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:11.946118       1 main.go:301] handling current node
	I1219 03:23:21.947459       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:21.947498       1 main.go:301] handling current node
	I1219 03:23:31.952163       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:31.952196       1 main.go:301] handling current node
	I1219 03:23:41.944118       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:41.944176       1 main.go:301] handling current node
	I1219 03:23:51.949585       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:23:51.949617       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e80d5d62bfdccfdfe3b1ea4a0e9ada6c25b66f5f5a6a80697856ea062adbe100] <==
	I1219 03:10:10.583004       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:10:10.583125       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583190       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583344       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583413       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583485       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583543       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:10:10.583600       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:10:10.583658       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:10:10.583735       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584188       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:15:10.584289       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584352       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584448       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584519       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584743       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584830       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.584915       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:15:10.584995       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:15:10.585051       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:15:10.585117       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:20:10.584921       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 03:20:10.585580       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 03:20:10.585664       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 03:20:10.585744       1 handler.go:232] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	
	
	==> kube-controller-manager [dca7ec4a11ad9581ca73f7dc0a44569b981b375142761b6af7e7d3724cdbe386] <==
	I1219 03:05:29.137946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79" duration="10.500982ms"
	I1219 03:05:29.138772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79" duration="222.04µs"
	I1219 03:05:30.139560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9" duration="10.738008ms"
	I1219 03:05:30.141370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9" duration="230.761µs"
	I1219 03:05:35.145518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="124.735µs"
	I1219 03:05:36.153341       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="7.771826ms"
	I1219 03:05:36.153487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479" duration="81.765µs"
	I1219 03:05:36.161499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="142.354µs"
	I1219 03:05:44.124877       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.783071ms"
	I1219 03:05:44.124969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.031µs"
	I1219 03:05:44.322554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="10.292955ms"
	I1219 03:05:44.322813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd" duration="137.021µs"
	I1219 03:05:53.457987       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongplugins.configuration.konghq.com"
	I1219 03:05:53.458044       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 03:05:53.458064       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="tcpingresses.configuration.konghq.com"
	I1219 03:05:53.458080       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 03:05:53.458106       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongconsumers.configuration.konghq.com"
	I1219 03:05:53.458129       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongingresses.configuration.konghq.com"
	I1219 03:05:53.458159       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="udpingresses.configuration.konghq.com"
	I1219 03:05:53.458185       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongconsumergroups.configuration.konghq.com"
	I1219 03:05:53.458213       1 resource_quota_monitor.go:210] "QuotaMonitor created object count evaluator" resource="kongcustomentities.configuration.konghq.com"
	I1219 03:05:53.458314       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1219 03:05:53.658752       1 shared_informer.go:318] Caches are synced for resource quota
	I1219 03:05:53.873190       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1219 03:05:53.973659       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [9a529209e91c73dd4753533a0684e92d16e448ef5aed74f450f82867ec17304c] <==
	I1219 03:05:11.436432       1 server_others.go:69] "Using iptables proxy"
	I1219 03:05:11.452009       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1219 03:05:11.479225       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:11.482560       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:05:11.482604       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1219 03:05:11.482625       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1219 03:05:11.482679       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:05:11.483072       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:05:11.483108       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:11.485106       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:05:11.485126       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:05:11.485951       1 config.go:315] "Starting node config controller"
	I1219 03:05:11.486004       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:05:11.485951       1 config.go:188] "Starting service config controller"
	I1219 03:05:11.486179       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:05:11.585764       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:05:11.587020       1 shared_informer.go:318] Caches are synced for node config
	I1219 03:05:11.587059       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6764bc2ee8b6d353f059a22bdc0c53e92e10b6672c29a9f618bd8d9812741d3b] <==
	I1219 03:05:08.072216       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:05:10.585445       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:05:10.585508       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:05:10.585524       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:05:10.585535       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:05:10.628537       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:05:10.628629       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:10.631418       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:05:10.631571       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:10.633792       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:05:10.631594       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:05:10.734781       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062320     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c53e26af-d9fd-4efc-9354-3b3e505b50f1-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2\" (UID: \"c53e26af-d9fd-4efc-9354-3b3e505b50f1\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062411     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwqf\" (UniqueName: \"kubernetes.io/projected/583637fe-b99f-4b55-8173-e40ef125a4da-kube-api-access-lrwqf\") pod \"kubernetes-dashboard-auth-96f55cbc9-q6w55\" (UID: \"583637fe-b99f-4b55-8173-e40ef125a4da\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062450     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062475     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062493     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/970184f3-748e-4083-93e1-27215e7d3544-tmp-volume\") pod \"kubernetes-dashboard-api-6c85dd6d79-gplb7\" (UID: \"970184f3-748e-4083-93e1-27215e7d3544\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062547     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/583637fe-b99f-4b55-8173-e40ef125a4da-tmp-volume\") pod \"kubernetes-dashboard-auth-96f55cbc9-q6w55\" (UID: \"583637fe-b99f-4b55-8173-e40ef125a4da\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55"
	Dec 19 03:05:24 old-k8s-version-433330 kubelet[727]: I1219 03:05:24.062611     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/b6d4ec1c-37c9-45af-84aa-2246d710edf0-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-f487b85cd-7vrxn\" (UID: \"b6d4ec1c-37c9-45af-84aa-2246d710edf0\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn"
	Dec 19 03:05:27 old-k8s-version-433330 kubelet[727]: I1219 03:05:27.257035     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:27 old-k8s-version-433330 kubelet[727]: I1219 03:05:27.257133     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.110504     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-858bd7466-nt8k8" podStartSLOduration=2.1424808889999998 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.288808133 +0000 UTC m=+17.406061880" lastFinishedPulling="2025-12-19 03:05:27.256749566 +0000 UTC m=+20.374003326" observedRunningTime="2025-12-19 03:05:28.109420313 +0000 UTC m=+21.226674073" watchObservedRunningTime="2025-12-19 03:05:28.110422335 +0000 UTC m=+21.227676096"
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.215638     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:28 old-k8s-version-433330 kubelet[727]: I1219 03:05:28.215739     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:29 old-k8s-version-433330 kubelet[727]: I1219 03:05:29.086317     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:29 old-k8s-version-433330 kubelet[727]: I1219 03:05:29.086398     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:30 old-k8s-version-433330 kubelet[727]: I1219 03:05:30.129411     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-6c85dd6d79-gplb7" podStartSLOduration=3.221513351 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.307578658 +0000 UTC m=+17.424832408" lastFinishedPulling="2025-12-19 03:05:28.215417358 +0000 UTC m=+21.332671100" observedRunningTime="2025-12-19 03:05:29.130889061 +0000 UTC m=+22.248142823" watchObservedRunningTime="2025-12-19 03:05:30.129352043 +0000 UTC m=+23.246605805"
	Dec 19 03:05:30 old-k8s-version-433330 kubelet[727]: I1219 03:05:30.130193     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-96f55cbc9-q6w55" podStartSLOduration=2.356310917 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.31224463 +0000 UTC m=+17.429498372" lastFinishedPulling="2025-12-19 03:05:29.086067921 +0000 UTC m=+22.203321673" observedRunningTime="2025-12-19 03:05:30.128668409 +0000 UTC m=+23.245922169" watchObservedRunningTime="2025-12-19 03:05:30.130134218 +0000 UTC m=+23.247387978"
	Dec 19 03:05:35 old-k8s-version-433330 kubelet[727]: I1219 03:05:35.294232     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:35 old-k8s-version-433330 kubelet[727]: I1219 03:05:35.294310     727 kubelet_resources.go:45] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:36 old-k8s-version-433330 kubelet[727]: I1219 03:05:36.145317     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-6b5c7dc479-wxct2" podStartSLOduration=2.170852672 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.319586522 +0000 UTC m=+17.436840275" lastFinishedPulling="2025-12-19 03:05:35.293995871 +0000 UTC m=+28.411249625" observedRunningTime="2025-12-19 03:05:36.145033222 +0000 UTC m=+29.262286982" watchObservedRunningTime="2025-12-19 03:05:36.145262022 +0000 UTC m=+29.262515784"
	Dec 19 03:05:36 old-k8s-version-433330 kubelet[727]: I1219 03:05:36.161013     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-f487b85cd-7vrxn" podStartSLOduration=2.986982841 podCreationTimestamp="2025-12-19 03:05:23 +0000 UTC" firstStartedPulling="2025-12-19 03:05:24.31920326 +0000 UTC m=+17.436457054" lastFinishedPulling="2025-12-19 03:05:34.493165004 +0000 UTC m=+27.610418746" observedRunningTime="2025-12-19 03:05:36.16087964 +0000 UTC m=+29.278133404" watchObservedRunningTime="2025-12-19 03:05:36.160944533 +0000 UTC m=+29.278198294"
	Dec 19 03:05:42 old-k8s-version-433330 kubelet[727]: I1219 03:05:42.150477     727 scope.go:117] "RemoveContainer" containerID="4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d"
	Dec 19 03:23:51 old-k8s-version-433330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:23:51 old-k8s-version-433330 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:23:51 old-k8s-version-433330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:23:51 old-k8s-version-433330 systemd[1]: kubelet.service: Consumed 21.277s CPU time.
	
	
	==> kubernetes-dashboard [162ae6553f9ecd77ec53fa67c114215b76b2baa657cb81e65ce4554f0273417d] <==
	I1219 03:05:27.332655       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:27.393367       1 init.go:48] Using in-cluster config
	I1219 03:05:27.393589       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [572a9a98a5b172d9fdd8fb35ee68c37988f906c342e70e89e9bc008440b9c471] <==
	I1219 03:05:28.320430       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:05:28.320512       1 init.go:49] Using in-cluster config
	I1219 03:05:28.320694       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:05:28.320747       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:28.320756       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:28.320762       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:28.327903       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:05:28.327931       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:28.332767       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:28.336184       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:05:58.341672       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [9757437ad1c1d294af43014db84d4705543c3ebf727fe8d8aa2bcc64dab4ebe2] <==
	E1219 03:21:35.368770       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:22:35.366075       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:35.366592       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:21:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:54 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:21:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:54 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:22:58 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:04 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:14 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:24 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:28 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:34 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:23:44 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	
	
	==> kubernetes-dashboard [c787e566a13574b826852efdc66cad3575fec6831fdc3a03d85531ffb5a730c9] <==
	I1219 03:05:29.223480       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:05:29.223546       1 init.go:49] Using in-cluster config
	I1219 03:05:29.223660       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [4a2a86182d6e0257a9f9f666cddbee5bfd7ae7855be029a79247541fbec34a4d] <==
	I1219 03:05:11.393839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:41.397217       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b58c35740f2bd58f8c06629e5842dc5bfb5070615f14eabac5641c2021fc5622] <==
	I1219 03:05:42.205301       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:05:42.214869       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:05:42.214917       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:05:59.616530       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:05:59.616620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eca1d2cd-fec8-4561-9433-a93751f8f3f7", APIVersion:"v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3 became leader
	I1219 03:05:59.616726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3!
	I1219 03:05:59.716964       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-433330_fa64f0b5-7eb2-42bf-8fc2-3317af087ee3!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433330 -n old-k8s-version-433330: exit status 2 (362.798392ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-433330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.58s)
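For context, the kubelet journal in the post-mortem above ends with systemd stopping kubelet at 03:23:51, which lines up with the pause flow disabling kubelet on the node before it pauses containers (the same systemctl step is visible in the no-preload trace below). As a rough sketch, the node state after such a failed pause can be re-checked by hand with simplified variants of the commands the harness itself issues over SSH, reusing the profile name from this run:

	out/minikube-linux-amd64 -p old-k8s-version-433330 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-amd64 -p old-k8s-version-433330 ssh -- sudo crictl ps -a
	out/minikube-linux-amd64 -p old-k8s-version-433330 ssh -- sudo runc list -f json

These are illustrative invocations, not commands taken from this log; the pause code runs close equivalents via its ssh_runner, as the traces below show.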

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-278042 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-278042 --alsologtostderr -v=1: exit status 80 (1.748680375s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-278042 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:23:55.061141  369065 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:23:55.061396  369065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:23:55.061405  369065 out.go:374] Setting ErrFile to fd 2...
	I1219 03:23:55.061409  369065 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:23:55.061673  369065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:23:55.061993  369065 out.go:368] Setting JSON to false
	I1219 03:23:55.062012  369065 mustload.go:66] Loading cluster: no-preload-278042
	I1219 03:23:55.062359  369065 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:23:55.062832  369065 cli_runner.go:164] Run: docker container inspect no-preload-278042 --format={{.State.Status}}
	I1219 03:23:55.083741  369065 host.go:66] Checking if "no-preload-278042" exists ...
	I1219 03:23:55.084029  369065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:23:55.154489  369065 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-19 03:23:55.143970471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:23:55.155262  369065 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-278042 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1219 03:23:55.157176  369065 out.go:179] * Pausing node no-preload-278042 ... 
	I1219 03:23:55.158441  369065 host.go:66] Checking if "no-preload-278042" exists ...
	I1219 03:23:55.158752  369065 ssh_runner.go:195] Run: systemctl --version
	I1219 03:23:55.158813  369065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-278042
	I1219 03:23:55.178803  369065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/no-preload-278042/id_rsa Username:docker}
	I1219 03:23:55.282342  369065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:23:55.294585  369065 pause.go:52] kubelet running: true
	I1219 03:23:55.294648  369065 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:23:55.491827  369065 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:23:55.491934  369065 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:23:55.568789  369065 cri.go:92] found id: "7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f"
	I1219 03:23:55.568817  369065 cri.go:92] found id: "88f8999e01d5bc23ebc968525542d039ae5c65ebd88f7ecad360345dc8277d94"
	I1219 03:23:55.568824  369065 cri.go:92] found id: "53f1be74e873df0c32c600b228ba909dde859aa38c23f9a71f536c90aa4e096f"
	I1219 03:23:55.568829  369065 cri.go:92] found id: "98dcabe770e7dcf718bfbc7938b663e3dd19fd9ad86c2bd261a4099febad9b1b"
	I1219 03:23:55.568834  369065 cri.go:92] found id: "757ccd2caa9cd35651079514b95b85a3612146f0d5b17fa735322d1e2ee036f1"
	I1219 03:23:55.568839  369065 cri.go:92] found id: "5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a"
	I1219 03:23:55.568845  369065 cri.go:92] found id: "001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae"
	I1219 03:23:55.568850  369065 cri.go:92] found id: "973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2"
	I1219 03:23:55.568854  369065 cri.go:92] found id: "821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec"
	I1219 03:23:55.568862  369065 cri.go:92] found id: "5935e257f3a0964bf239f408c9308c3b84961c75f06f32b2fda50133fe1ddbbd"
	I1219 03:23:55.568867  369065 cri.go:92] found id: "29fec7f14635a794f200efe276e62a0fc3151ea3d427cb21da297c53114fd8b9"
	I1219 03:23:55.568871  369065 cri.go:92] found id: "94493b4e713137493bb6661177bb8b8dbc3d2479ba340627fb0865ea869c74e9"
	I1219 03:23:55.568876  369065 cri.go:92] found id: "0c57b1705660a2a7d6002a4a19c5d034115548c1dd0133f8edd48a88c90f5f15"
	I1219 03:23:55.568880  369065 cri.go:92] found id: "bba0b0d89d520cc6ca6a07611a31a2778cb1e41e66784ac255b63f970adcffb7"
	I1219 03:23:55.568897  369065 cri.go:92] found id: "d438e50bdc5cf86c6ad101cf6a3ca9c6c7091524bb7ffd95705de1d1a5ed8994"
	I1219 03:23:55.568918  369065 cri.go:92] found id: ""
	I1219 03:23:55.568974  369065 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:23:55.581836  369065 retry.go:31] will retry after 341.670533ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:55Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:23:55.924399  369065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:23:55.938445  369065 pause.go:52] kubelet running: false
	I1219 03:23:55.938514  369065 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:23:56.121843  369065 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:23:56.121919  369065 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:23:56.188172  369065 cri.go:92] found id: "7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f"
	I1219 03:23:56.188197  369065 cri.go:92] found id: "88f8999e01d5bc23ebc968525542d039ae5c65ebd88f7ecad360345dc8277d94"
	I1219 03:23:56.188203  369065 cri.go:92] found id: "53f1be74e873df0c32c600b228ba909dde859aa38c23f9a71f536c90aa4e096f"
	I1219 03:23:56.188213  369065 cri.go:92] found id: "98dcabe770e7dcf718bfbc7938b663e3dd19fd9ad86c2bd261a4099febad9b1b"
	I1219 03:23:56.188218  369065 cri.go:92] found id: "757ccd2caa9cd35651079514b95b85a3612146f0d5b17fa735322d1e2ee036f1"
	I1219 03:23:56.188223  369065 cri.go:92] found id: "5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a"
	I1219 03:23:56.188227  369065 cri.go:92] found id: "001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae"
	I1219 03:23:56.188232  369065 cri.go:92] found id: "973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2"
	I1219 03:23:56.188236  369065 cri.go:92] found id: "821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec"
	I1219 03:23:56.188258  369065 cri.go:92] found id: "5935e257f3a0964bf239f408c9308c3b84961c75f06f32b2fda50133fe1ddbbd"
	I1219 03:23:56.188267  369065 cri.go:92] found id: "29fec7f14635a794f200efe276e62a0fc3151ea3d427cb21da297c53114fd8b9"
	I1219 03:23:56.188272  369065 cri.go:92] found id: "94493b4e713137493bb6661177bb8b8dbc3d2479ba340627fb0865ea869c74e9"
	I1219 03:23:56.188276  369065 cri.go:92] found id: "0c57b1705660a2a7d6002a4a19c5d034115548c1dd0133f8edd48a88c90f5f15"
	I1219 03:23:56.188286  369065 cri.go:92] found id: "bba0b0d89d520cc6ca6a07611a31a2778cb1e41e66784ac255b63f970adcffb7"
	I1219 03:23:56.188291  369065 cri.go:92] found id: "d438e50bdc5cf86c6ad101cf6a3ca9c6c7091524bb7ffd95705de1d1a5ed8994"
	I1219 03:23:56.188300  369065 cri.go:92] found id: ""
	I1219 03:23:56.188343  369065 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:23:56.200239  369065 retry.go:31] will retry after 253.033775ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:56Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:23:56.453652  369065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:23:56.466654  369065 pause.go:52] kubelet running: false
	I1219 03:23:56.466716  369065 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:23:56.650795  369065 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:23:56.650878  369065 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:23:56.725250  369065 cri.go:92] found id: "7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f"
	I1219 03:23:56.725275  369065 cri.go:92] found id: "88f8999e01d5bc23ebc968525542d039ae5c65ebd88f7ecad360345dc8277d94"
	I1219 03:23:56.725281  369065 cri.go:92] found id: "53f1be74e873df0c32c600b228ba909dde859aa38c23f9a71f536c90aa4e096f"
	I1219 03:23:56.725285  369065 cri.go:92] found id: "98dcabe770e7dcf718bfbc7938b663e3dd19fd9ad86c2bd261a4099febad9b1b"
	I1219 03:23:56.725289  369065 cri.go:92] found id: "757ccd2caa9cd35651079514b95b85a3612146f0d5b17fa735322d1e2ee036f1"
	I1219 03:23:56.725294  369065 cri.go:92] found id: "5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a"
	I1219 03:23:56.725298  369065 cri.go:92] found id: "001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae"
	I1219 03:23:56.725302  369065 cri.go:92] found id: "973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2"
	I1219 03:23:56.725307  369065 cri.go:92] found id: "821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec"
	I1219 03:23:56.725315  369065 cri.go:92] found id: "5935e257f3a0964bf239f408c9308c3b84961c75f06f32b2fda50133fe1ddbbd"
	I1219 03:23:56.725328  369065 cri.go:92] found id: "29fec7f14635a794f200efe276e62a0fc3151ea3d427cb21da297c53114fd8b9"
	I1219 03:23:56.725337  369065 cri.go:92] found id: "94493b4e713137493bb6661177bb8b8dbc3d2479ba340627fb0865ea869c74e9"
	I1219 03:23:56.725342  369065 cri.go:92] found id: "0c57b1705660a2a7d6002a4a19c5d034115548c1dd0133f8edd48a88c90f5f15"
	I1219 03:23:56.725348  369065 cri.go:92] found id: "bba0b0d89d520cc6ca6a07611a31a2778cb1e41e66784ac255b63f970adcffb7"
	I1219 03:23:56.725354  369065 cri.go:92] found id: "d438e50bdc5cf86c6ad101cf6a3ca9c6c7091524bb7ffd95705de1d1a5ed8994"
	I1219 03:23:56.725364  369065 cri.go:92] found id: ""
	I1219 03:23:56.725429  369065 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:23:56.741082  369065 out.go:203] 
	W1219 03:23:56.742219  369065 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:23:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 03:23:56.742236  369065 out.go:285] * 
	* 
	W1219 03:23:56.746484  369065 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 03:23:56.747781  369065 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-278042 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-278042
helpers_test.go:244: (dbg) docker inspect no-preload-278042:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	        "Created": "2025-12-19T03:03:43.244016686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 339111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:01.069592419Z",
	            "FinishedAt": "2025-12-19T03:05:00.08601805Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hostname",
	        "HostsPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hosts",
	        "LogPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35-json.log",
	        "Name": "/no-preload-278042",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-278042:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-278042",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	                "LowerDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-278042",
	                "Source": "/var/lib/docker/volumes/no-preload-278042/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-278042",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-278042",
	                "name.minikube.sigs.k8s.io": "no-preload-278042",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "86d771358686193a8ee27ccd7dd8113a32471ee83b7a9b27de2361ca35da19bf",
	            "SandboxKey": "/var/run/docker/netns/86d771358686",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-278042": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40e663ebb9c92fe8e9b5d1c06f073100d83df79efa76e295e52399b291babbbc",
	                    "EndpointID": "8aa1f1b0831c873e8bd4b8eb538f83b636c1962501683e75418947d1eb28c78e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7e:f0:a4:c4:bd:57",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-278042",
	                        "c49a965a7d8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042: exit status 2 (351.21575ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-278042 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-278042 logs -n 25: (1.348708641s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p custom-flannel-821749 sudo crio config                                                                                                                                                                                                     │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                    │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
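
The interleaved readiness loops above come from two minikube start runs executing in parallel (process 350034 for the embed-certs-805185 profile, process 352121 for default-k8s-diff-port-717222). If the same waits need to be reproduced by hand, a rough command-line equivalent — assuming the kubeconfig contexts written by these runs are still present — is:

	kubectl --context embed-certs-805185 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	kubectl --context default-k8s-diff-port-717222 -n kubernetes-dashboard wait pod -l app.kubernetes.io/name=kubernetes-dashboard-web --for=condition=Ready --timeout=2m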
	
	
	==> CRI-O <==
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.736394898Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=717982ce-b0aa-47e4-97b9-7ccc9a3d471e name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.737528512Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=75c156d7-ee04-4c56-8f11-b0efea376553 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.737669801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742166616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742306458Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4ebf7307f813aac6a8d23d828e41e3d28400a9be6b5a1e96c9c7ac302e20c6f7/merged/etc/passwd: no such file or directory"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742328757Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4ebf7307f813aac6a8d23d828e41e3d28400a9be6b5a1e96c9c7ac302e20c6f7/merged/etc/group: no such file or directory"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742530495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.773812294Z" level=info msg="Created container 7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f: kube-system/storage-provisioner/storage-provisioner" id=75c156d7-ee04-4c56-8f11-b0efea376553 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.774507779Z" level=info msg="Starting container: 7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f" id=d7a3d3c0-89c8-443d-926a-bf8921e3011d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.776440067Z" level=info msg="Started container" PID=3331 containerID=7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f description=kube-system/storage-provisioner/storage-provisioner id=d7a3d3c0-89c8-443d-926a-bf8921e3011d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c464fbce01c73bc9002a59a55e969a9dcc96c829129ee9c487d0762b3a2a4169
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.362057944Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366564465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366589659Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366607882Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370444341Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370467276Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370484152Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374344046Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374374846Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374396298Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378400072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378429166Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378444369Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.382115308Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.382141451Z" level=info msg="Updated default CNI network name to kindnet"
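
The CNI monitoring events above show CRI-O picking up the config written by kindnet; the file it ultimately selected can be inspected on the node, for example with:

	minikube -p no-preload-278042 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist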
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	7d6861325db2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   c464fbce01c73       storage-provisioner                                     kube-system
	5935e257f3a09       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   d0d6b23f0e1dc       kubernetes-dashboard-auth-bf9cfccb5-mrw8q               kubernetes-dashboard
	29fec7f14635a       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   0e0159aebbb3f       kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk   kubernetes-dashboard
	94493b4e71313       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   a29ecbc685444       kubernetes-dashboard-kong-78b7499b45-z266g              kubernetes-dashboard
	0c57b1705660a       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   a29ecbc685444       kubernetes-dashboard-kong-78b7499b45-z266g              kubernetes-dashboard
	bba0b0d89d520       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   8dedb4931ab92       kubernetes-dashboard-web-7f7574785f-h2jf5               kubernetes-dashboard
	d438e50bdc5cf       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   2d9da507d045f       kubernetes-dashboard-api-c7898775-zhmv8                 kubernetes-dashboard
	88f8999e01d5b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                           18 minutes ago      Running             coredns                                0                   192133b79d756       coredns-7d764666f9-vj7lm                                kube-system
	53f1be74e873d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   c464fbce01c73       storage-provisioner                                     kube-system
	bf4ed13bede99       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   1a93d07c85274       busybox                                                 default
	98dcabe770e7d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   c96cb5fa17a00       kindnet-xrp2s                                           kube-system
	757ccd2caa9cd       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                           18 minutes ago      Running             kube-proxy                             0                   4e59b01d6de99       kube-proxy-g2gm4                                        kube-system
	5f148a7e487d8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                           18 minutes ago      Running             etcd                                   0                   03f900ecc7129       etcd-no-preload-278042                                  kube-system
	001407ac1b909       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                           18 minutes ago      Running             kube-controller-manager                0                   d44cf856d1c8b       kube-controller-manager-no-preload-278042               kube-system
	973ccccab2576       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                           18 minutes ago      Running             kube-scheduler                         0                   3f68017fcfb0f       kube-scheduler-no-preload-278042                        kube-system
	821b9cbc72eb6       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                           18 minutes ago      Running             kube-apiserver                         0                   46991eb1a5abd       kube-apiserver-no-preload-278042                        kube-system
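
The table above is CRI-O's view of the node at collection time; an equivalent listing can be regenerated while the profile is still running (crictl is normally available in minikube's node image):

	minikube -p no-preload-278042 ssh -- sudo crictl ps -a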
	
	
	==> coredns [88f8999e01d5bc23ebc968525542d039ae5c65ebd88f7ecad360345dc8277d94] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57319 - 34037 "HINFO IN 3016703752619529984.3565104935656887276. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019206295s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
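
The plugin/ready messages refer to CoreDNS's readiness endpoint, which by default listens on port 8181 inside the pod; a minimal sketch for probing it directly, assuming the pod name from the container listing above and a working no-preload-278042 context:

	kubectl --context no-preload-278042 -n kube-system port-forward pod/coredns-7d764666f9-vj7lm 8181:8181 &
	curl -s http://127.0.0.1:8181/ready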
	
	
	==> describe nodes <==
	Name:               no-preload-278042
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-278042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-278042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-278042
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:23:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:23:53 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:23:53 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:23:53 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:23:53 +0000   Fri, 19 Dec 2025 03:04:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-278042
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                8fbc19b8-72f7-4938-83d9-fc3015dde7d1
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7d764666f9-vj7lm                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-278042                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-xrp2s                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-278042                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-278042                200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-g2gm4                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-278042                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-c7898775-zhmv8                  100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-bf9cfccb5-mrw8q                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-z266g               0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-h2jf5                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  19m   node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
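
The node description above is standard kubectl output and can be refreshed at any point with:

	kubectl --context no-preload-278042 describe node no-preload-278042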
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a] <==
	{"level":"info","ts":"2025-12-19T03:05:08.315130Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:05:08.988344Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988524Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:05:08.988542Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989244Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989319Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:05:08.989346Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989356Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.990632Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-278042 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:05:08.990634Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:05:08.990681Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:05:08.990883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:08.991615Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:08.992858Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:05:08.993684Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:05:09.001234Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:05:09.001416Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-19T03:15:09.026171Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":960}
	{"level":"info","ts":"2025-12-19T03:15:09.034559Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":960,"took":"7.955659ms","hash":4263527716,"current-db-size-bytes":3899392,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3899392,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:15:09.034609Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4263527716,"revision":960,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:09.031768Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1204}
	{"level":"info","ts":"2025-12-19T03:20:09.034352Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1204,"took":"2.163711ms","hash":2275355149,"current-db-size-bytes":3899392,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1998848,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:20:09.034391Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2275355149,"revision":1204,"compact-revision":960}
	
	
	==> kernel <==
	 03:23:58 up  1:06,  0 user,  load average: 0.55, 0.54, 1.14
	Linux no-preload-278042 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98dcabe770e7dcf718bfbc7938b663e3dd19fd9ad86c2bd261a4099febad9b1b] <==
	I1219 03:21:51.360619       1 main.go:301] handling current node
	I1219 03:22:01.369216       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:01.369247       1 main.go:301] handling current node
	I1219 03:22:11.367736       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:11.367767       1 main.go:301] handling current node
	I1219 03:22:21.362127       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:21.362158       1 main.go:301] handling current node
	I1219 03:22:31.365054       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:31.365106       1 main.go:301] handling current node
	I1219 03:22:41.367805       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:41.367840       1 main.go:301] handling current node
	I1219 03:22:51.360347       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:51.360384       1 main.go:301] handling current node
	I1219 03:23:01.367434       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:01.367473       1 main.go:301] handling current node
	I1219 03:23:11.368784       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:11.368827       1 main.go:301] handling current node
	I1219 03:23:21.360067       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:21.360103       1 main.go:301] handling current node
	I1219 03:23:31.360833       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:31.360886       1 main.go:301] handling current node
	I1219 03:23:41.366471       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:41.366500       1 main.go:301] handling current node
	I1219 03:23:51.360858       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:51.360894       1 main.go:301] handling current node
	
	
	==> kube-apiserver [821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec] <==
	W1219 03:05:13.385125       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.401923       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.413483       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.423560       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.434652       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.450356       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.470070       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.481151       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.492407       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.503960       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.519221       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.528090       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 03:05:13.711310       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:05:13.761392       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:13.862098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:13.961908       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:15.702973       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:05:15.771287       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:15.776040       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:15.788145       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.102.118.21"}
	I1219 03:05:15.795336       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.103.152.147"}
	I1219 03:05:15.798838       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.54.162"}
	I1219 03:05:15.807348       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.173.60"}
	I1219 03:05:15.813204       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.235.156"}
	I1219 03:15:10.324126       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
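
The clusterIP allocation messages near the end correspond to the Services created by the dashboard addon; the assignments can be confirmed directly with:

	kubectl --context no-preload-278042 -n kubernetes-dashboard get svc -o wide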
	
	
	==> kube-controller-manager [001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae] <==
	I1219 03:05:13.463362       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463414       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463386       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1219 03:05:13.463438       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463465       1 range_allocator.go:177] "Sending events to api server"
	I1219 03:05:13.463505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1219 03:05:13.463516       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:13.463521       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463634       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463681       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463711       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464012       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464187       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464219       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464367       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464376       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464393       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464411       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.472055       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:14.564522       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.564546       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:05:14.564553       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.564553       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:05:14.572694       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.581900       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [757ccd2caa9cd35651079514b95b85a3612146f0d5b17fa735322d1e2ee036f1] <==
	I1219 03:05:11.015248       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:11.078140       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:11.178544       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:11.178579       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1219 03:05:11.178664       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:11.202324       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:11.202395       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:05:11.207676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:11.208164       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:05:11.208215       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:11.212272       1 config.go:200] "Starting service config controller"
	I1219 03:05:11.212297       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:11.212328       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:11.212333       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:11.212401       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:11.212410       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:11.212604       1 config.go:309] "Starting node config controller"
	I1219 03:05:11.212646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:11.212671       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:11.313219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:05:11.313270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:11.313557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2] <==
	I1219 03:05:08.762319       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:05:10.311124       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:05:10.311291       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:05:10.311314       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:05:10.311345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:05:10.339015       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:05:10.339346       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:10.343655       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:10.343694       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:10.345418       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:10.347040       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:10.447312       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:20:26 no-preload-278042 kubelet[713]: E1219 03:20:26.562657     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:20:30 no-preload-278042 kubelet[713]: E1219 03:20:30.562630     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:20:30 no-preload-278042 kubelet[713]: E1219 03:20:30.562787     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:20:48 no-preload-278042 kubelet[713]: E1219 03:20:48.562287     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:20:54 no-preload-278042 kubelet[713]: E1219 03:20:54.562796     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:21:06 no-preload-278042 kubelet[713]: E1219 03:21:06.562680     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:21:11 no-preload-278042 kubelet[713]: E1219 03:21:11.563417     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:21:28 no-preload-278042 kubelet[713]: E1219 03:21:28.562333     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:21:31 no-preload-278042 kubelet[713]: E1219 03:21:31.563340     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:21:53 no-preload-278042 kubelet[713]: E1219 03:21:53.563344     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:22:03 no-preload-278042 kubelet[713]: E1219 03:22:03.563479     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:22:18 no-preload-278042 kubelet[713]: E1219 03:22:18.562844     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:22:26 no-preload-278042 kubelet[713]: E1219 03:22:26.562406     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:22:37 no-preload-278042 kubelet[713]: E1219 03:22:37.563042     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:22:49 no-preload-278042 kubelet[713]: E1219 03:22:49.563063     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:22:54 no-preload-278042 kubelet[713]: E1219 03:22:54.563196     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:23:18 no-preload-278042 kubelet[713]: E1219 03:23:18.563266     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:23:26 no-preload-278042 kubelet[713]: E1219 03:23:26.562431     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:23:48 no-preload-278042 kubelet[713]: E1219 03:23:48.562683     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:23:49 no-preload-278042 kubelet[713]: E1219 03:23:49.562573     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:23:51 no-preload-278042 kubelet[713]: E1219 03:23:51.563334     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:23:55 no-preload-278042 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:23:55 no-preload-278042 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:23:55 no-preload-278042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:23:55 no-preload-278042 systemd[1]: kubelet.service: Consumed 25.087s CPU time.
	
	
	==> kubernetes-dashboard [29fec7f14635a794f200efe276e62a0fc3151ea3d427cb21da297c53114fd8b9] <==
	E1219 03:21:25.195161       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:22:25.195098       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:25.194956       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:21:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	
	
	==> kubernetes-dashboard [5935e257f3a0964bf239f408c9308c3b84961c75f06f32b2fda50133fe1ddbbd] <==
	I1219 03:05:26.300513       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:05:26.300578       1 init.go:49] Using in-cluster config
	I1219 03:05:26.300723       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [bba0b0d89d520cc6ca6a07611a31a2778cb1e41e66784ac255b63f970adcffb7] <==
	I1219 03:05:19.397607       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:19.397662       1 init.go:48] Using in-cluster config
	I1219 03:05:19.397903       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [d438e50bdc5cf86c6ad101cf6a3ca9c6c7091524bb7ffd95705de1d1a5ed8994] <==
	I1219 03:05:17.224225       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:05:17.224299       1 init.go:49] Using in-cluster config
	I1219 03:05:17.224498       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:17.224512       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:05:17.224518       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:17.224524       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:17.230241       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 03:05:17.230266       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:17.233542       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:17.236374       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:05:47.240946       1 manager.go:101] Successful request to sidecar
	
	
	==> storage-provisioner [53f1be74e873df0c32c600b228ba909dde859aa38c23f9a71f536c90aa4e096f] <==
	I1219 03:05:10.950483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:40.952323       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f] <==
	W1219 03:23:33.361807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:35.365216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:35.369382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:37.372268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:37.377522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:39.380227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:39.384143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:41.387062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:41.392658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:43.395737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:43.399497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:45.402464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:45.406320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:47.409103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:47.413050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:49.416534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:49.421078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:51.424048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:51.427837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:53.432223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:53.437004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:55.440893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:55.444727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:57.448825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:57.453580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042: exit status 2 (337.151007ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-278042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-278042
helpers_test.go:244: (dbg) docker inspect no-preload-278042:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	        "Created": "2025-12-19T03:03:43.244016686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 339111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:01.069592419Z",
	            "FinishedAt": "2025-12-19T03:05:00.08601805Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hostname",
	        "HostsPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/hosts",
	        "LogPath": "/var/lib/docker/containers/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35/c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35-json.log",
	        "Name": "/no-preload-278042",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-278042:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-278042",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c49a965a7d8dc5e56c2d8d5130755c54d478e08514892d9a46448cdf90c3ea35",
	                "LowerDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fa6da707f89a44d8d74b00c5a1e05754941243c2c7fe2df12c57972e765ab6a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-278042",
	                "Source": "/var/lib/docker/volumes/no-preload-278042/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-278042",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-278042",
	                "name.minikube.sigs.k8s.io": "no-preload-278042",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "86d771358686193a8ee27ccd7dd8113a32471ee83b7a9b27de2361ca35da19bf",
	            "SandboxKey": "/var/run/docker/netns/86d771358686",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-278042": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40e663ebb9c92fe8e9b5d1c06f073100d83df79efa76e295e52399b291babbbc",
	                    "EndpointID": "8aa1f1b0831c873e8bd4b8eb538f83b636c1962501683e75418947d1eb28c78e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "7e:f0:a4:c4:bd:57",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-278042",
	                        "c49a965a7d8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042: exit status 2 (334.831135ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-278042 logs -n 25
E1219 03:23:59.336782    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-278042 logs -n 25: (1.326408256s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p custom-flannel-821749                                                                                                                                                                                                                      │ custom-flannel-821749        │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-433330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-278042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │                     │
	│ delete  │ -p disable-driver-mounts-507648                                                                                                                                                                                                               │ disable-driver-mounts-507648 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                               │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                    │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                     │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:05:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:05:53.092301  352121 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:05:53.092394  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092398  352121 out.go:374] Setting ErrFile to fd 2...
	I1219 03:05:53.092402  352121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:05:53.092674  352121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:05:53.093206  352121 out.go:368] Setting JSON to false
	I1219 03:05:53.094527  352121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2904,"bootTime":1766110649,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:05:53.094626  352121 start.go:143] virtualization: kvm guest
	I1219 03:05:53.096521  352121 out.go:179] * [default-k8s-diff-port-717222] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:05:53.097989  352121 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:05:53.098033  352121 notify.go:221] Checking for updates...
	I1219 03:05:53.100179  352121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:05:53.101370  352121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:05:53.102535  352121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:05:53.104239  352121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:05:53.105473  352121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:05:53.107110  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:53.107760  352121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:05:53.140217  352121 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:05:53.140357  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.211137  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.198937136 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.211231  352121 docker.go:319] overlay module found
	I1219 03:05:53.212919  352121 out.go:179] * Using the docker driver based on existing profile
	I1219 03:05:53.214070  352121 start.go:309] selected driver: docker
	I1219 03:05:53.214085  352121 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.214190  352121 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:05:53.214819  352121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:05:53.291099  352121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-19 03:05:53.277402722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:05:53.291489  352121 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:53.291541  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:53.291616  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:53.291670  352121 start.go:353] cluster config:
	{Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:05:53.293313  352121 out.go:179] * Starting "default-k8s-diff-port-717222" primary control-plane node in "default-k8s-diff-port-717222" cluster
	I1219 03:05:53.300833  352121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:05:53.301994  352121 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:05:53.303248  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:53.303295  352121 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1219 03:05:53.303311  352121 cache.go:65] Caching tarball of preloaded images
	I1219 03:05:53.303351  352121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:05:53.303428  352121 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:05:53.303442  352121 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1219 03:05:53.303571  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.330621  352121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:05:53.330652  352121 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:05:53.330671  352121 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:05:53.330767  352121 start.go:360] acquireMachinesLock for default-k8s-diff-port-717222: {Name:mk586c44d14a94c58ce6de7426a02f7319eb0716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:05:53.330860  352121 start.go:364] duration metric: took 49.392µs to acquireMachinesLock for "default-k8s-diff-port-717222"
	I1219 03:05:53.330886  352121 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:05:53.330893  352121 fix.go:54] fixHost starting: 
	I1219 03:05:53.331229  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.353305  352121 fix.go:112] recreateIfNeeded on default-k8s-diff-port-717222: state=Stopped err=<nil>
	W1219 03:05:53.353340  352121 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:05:52.784975  350034 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.785039  350034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:05:52.785129  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.788781  350034 addons.go:239] Setting addon default-storageclass=true in "embed-certs-805185"
	W1219 03:05:52.788860  350034 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:05:52.788932  350034 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:05:52.789603  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:52.803693  350034 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:52.803884  350034 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:05:52.803963  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.829487  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.833372  350034 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:52.833419  350034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:05:52.833502  350034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:05:52.838747  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.863871  350034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:05:52.921723  350034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:05:52.937553  350034 node_ready.go:35] waiting up to 6m0s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:52.960476  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:05:52.967329  350034 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:05:52.987994  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:05:54.352944  350034 node_ready.go:49] node "embed-certs-805185" is "Ready"
	I1219 03:05:54.352990  350034 node_ready.go:38] duration metric: took 1.41538432s for node "embed-certs-805185" to be "Ready" ...
	I1219 03:05:54.353006  350034 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:05:54.353065  350034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:05:54.864404  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.903892508s)
	I1219 03:05:54.864464  350034 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.897096827s)
	I1219 03:05:54.864523  350034 api_server.go:72] duration metric: took 2.115189515s to wait for apiserver process to appear ...
	I1219 03:05:54.864541  350034 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:05:54.864542  350034 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:05:54.864474  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.876456755s)
	I1219 03:05:54.864569  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:54.868475  350034 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:05:54.869335  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:54.869359  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:05:55.365237  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.371297  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:05:55.371332  350034 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
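The 500 responses above are the apiserver reporting that its rbac/bootstrap-roles (and initially scheduling/bootstrap-system-priority-classes) post-start hooks have not finished; minikube simply keeps polling /healthz until it returns 200, which it does at 03:05:55 further down. A minimal sketch of that kind of poll loop, assuming a self-signed apiserver certificate and a hypothetical waitForHealthz helper (this is not minikube's actual api_server.go code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout expires. During bootstrap the apiserver serves a
// self-signed certificate, so verification is skipped here (illustration only).
func waitForHealthz(endpoint string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", endpoint, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}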
	I1219 03:05:53.354901  352121 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-717222" ...
	I1219 03:05:53.354983  352121 cli_runner.go:164] Run: docker start default-k8s-diff-port-717222
	I1219 03:05:53.699823  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:05:53.723885  352121 kic.go:430] container "default-k8s-diff-port-717222" state is running.
	I1219 03:05:53.724437  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:53.748911  352121 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/config.json ...
	I1219 03:05:53.749250  352121 machine.go:94] provisionDockerMachine start ...
	I1219 03:05:53.749328  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:53.779816  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:53.780159  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:53.780174  352121 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:05:53.780830  352121 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1219 03:05:56.928761  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
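The "Error dialing TCP: ssh: handshake failed: EOF" at 03:05:53 is expected: the container was restarted a fraction of a second earlier and its sshd is not yet accepting connections, so the dial is retried until it succeeds at 03:05:56. A rough sketch of that retry behaviour, using golang.org/x/crypto/ssh rather than minikube's libmachine layer (illustration only; the helper name and timing are assumptions):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps trying to open an SSH connection until the freshly
// restarted container's sshd is ready to accept handshakes.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh to %s never came up: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	client, err := dialWithRetry("127.0.0.1:33133", "docker",
		"/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa",
		time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println("ssh connection established")
}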
	I1219 03:05:56.928788  352121 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-717222"
	I1219 03:05:56.928865  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:56.949113  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:56.949333  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:56.949347  352121 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-717222 && echo "default-k8s-diff-port-717222" | sudo tee /etc/hostname
	I1219 03:05:57.104743  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-717222
	
	I1219 03:05:57.104815  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.123728  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.124049  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.124072  352121 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-717222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-717222/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-717222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:05:57.273322  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:05:57.273360  352121 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:05:57.273404  352121 ubuntu.go:190] setting up certificates
	I1219 03:05:57.273418  352121 provision.go:84] configureAuth start
	I1219 03:05:57.273481  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:57.295306  352121 provision.go:143] copyHostCerts
	I1219 03:05:57.295369  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:05:57.295386  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:05:57.295456  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:05:57.295579  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:05:57.295589  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:05:57.295628  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:05:57.295782  352121 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:05:57.295804  352121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:05:57.295852  352121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:05:57.295936  352121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-717222 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-717222 localhost minikube]
	I1219 03:05:57.394982  352121 provision.go:177] copyRemoteCerts
	I1219 03:05:57.395055  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:05:57.395092  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.415801  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:57.518597  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:05:57.537101  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:05:57.555959  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:05:57.575813  352121 provision.go:87] duration metric: took 302.381293ms to configureAuth
	I1219 03:05:57.575844  352121 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:05:57.576048  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:05:57.576149  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:57.595953  352121 main.go:144] libmachine: Using SSH client type: native
	I1219 03:05:57.596182  352121 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1219 03:05:57.596200  352121 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:05:58.066496  352121 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:05:58.066522  352121 machine.go:97] duration metric: took 4.317251418s to provisionDockerMachine
	I1219 03:05:58.066537  352121 start.go:293] postStartSetup for "default-k8s-diff-port-717222" (driver="docker")
	I1219 03:05:58.066550  352121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:05:58.066636  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:05:58.066688  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.089735  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:55.753357  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:05:55.865655  350034 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1219 03:05:55.871156  350034 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1219 03:05:55.872281  350034 api_server.go:141] control plane version: v1.34.3
	I1219 03:05:55.872314  350034 api_server.go:131] duration metric: took 1.007762314s to wait for apiserver health ...
	I1219 03:05:55.872322  350034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:05:55.876404  350034 system_pods.go:59] 8 kube-system pods found
	I1219 03:05:55.876440  350034 system_pods.go:61] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.876448  350034 system_pods.go:61] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.876455  350034 system_pods.go:61] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.876464  350034 system_pods.go:61] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.876469  350034 system_pods.go:61] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.876474  350034 system_pods.go:61] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.876484  350034 system_pods.go:61] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.876491  350034 system_pods.go:61] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.876501  350034 system_pods.go:74] duration metric: took 4.173475ms to wait for pod list to return data ...
	I1219 03:05:55.876508  350034 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:05:55.879480  350034 default_sa.go:45] found service account: "default"
	I1219 03:05:55.879505  350034 default_sa.go:55] duration metric: took 2.991473ms for default service account to be created ...
	I1219 03:05:55.879514  350034 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:05:55.882762  350034 system_pods.go:86] 8 kube-system pods found
	I1219 03:05:55.882799  350034 system_pods.go:89] "coredns-66bc5c9577-8gphx" [4ddec921-4727-4a79-b09e-05dfa120cad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:05:55.882811  350034 system_pods.go:89] "etcd-embed-certs-805185" [02779763-12d3-4270-8949-35ef38129242] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:05:55.882822  350034 system_pods.go:89] "kindnet-jj9ms" [d0e51745-1c64-48ae-b569-6a0f1017cc8d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:05:55.882831  350034 system_pods.go:89] "kube-apiserver-embed-certs-805185" [e2618807-8983-4686-80ed-fdb6dcf39877] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:05:55.882844  350034 system_pods.go:89] "kube-controller-manager-embed-certs-805185" [cedaeea3-115c-4a53-94f3-bc035afde37f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:05:55.882860  350034 system_pods.go:89] "kube-proxy-p8pqg" [0bbe467b-7501-4a75-93bb-b1c33a1da403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:05:55.882873  350034 system_pods.go:89] "kube-scheduler-embed-certs-805185" [76561533-9142-4907-b7dd-4cf8149a73c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:05:55.882885  350034 system_pods.go:89] "storage-provisioner" [a7c7ec9b-ed70-43dd-aa6c-c365da9d4588] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1219 03:05:55.882898  350034 system_pods.go:126] duration metric: took 3.377076ms to wait for k8s-apps to be running ...
	I1219 03:05:55.882911  350034 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:05:55.882965  350034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:05:58.664447  350034 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.910960851s)
	I1219 03:05:58.664545  350034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:05:58.664639  350034 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.781656633s)
	I1219 03:05:58.664656  350034 system_svc.go:56] duration metric: took 2.781742979s WaitForService to wait for kubelet
	I1219 03:05:58.664665  350034 kubeadm.go:587] duration metric: took 5.915331625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:05:58.664687  350034 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:05:58.675538  350034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:05:58.675644  350034 node_conditions.go:123] node cpu capacity is 8
	I1219 03:05:58.675679  350034 node_conditions.go:105] duration metric: took 10.98619ms to run NodePressure ...
	I1219 03:05:58.675732  350034 start.go:242] waiting for startup goroutines ...
	I1219 03:05:58.853716  350034 addons.go:500] Verifying addon dashboard=true in "embed-certs-805185"
	I1219 03:05:58.854075  350034 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:05:58.881488  350034 out.go:179] * Verifying dashboard addon...
	I1219 03:05:58.197271  352121 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:05:58.201133  352121 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:05:58.201178  352121 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:05:58.201190  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:05:58.201247  352121 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:05:58.201317  352121 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:05:58.201402  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:05:58.208949  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:05:58.226900  352121 start.go:296] duration metric: took 160.349121ms for postStartSetup
	I1219 03:05:58.227004  352121 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:05:58.227051  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.247047  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.350362  352121 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:05:58.356036  352121 fix.go:56] duration metric: took 5.025136862s for fixHost
	I1219 03:05:58.356065  352121 start.go:83] releasing machines lock for "default-k8s-diff-port-717222", held for 5.025188602s
	I1219 03:05:58.356141  352121 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-717222
	I1219 03:05:58.375473  352121 ssh_runner.go:195] Run: cat /version.json
	I1219 03:05:58.375532  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.375568  352121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:05:58.375641  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:05:58.397142  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.397516  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:05:58.556811  352121 ssh_runner.go:195] Run: systemctl --version
	I1219 03:05:58.565677  352121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:05:58.614666  352121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:05:58.621187  352121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:05:58.621270  352121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:05:58.633014  352121 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:05:58.633047  352121 start.go:496] detecting cgroup driver to use...
	I1219 03:05:58.633088  352121 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:05:58.633137  352121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:05:58.657800  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:05:58.678804  352121 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:05:58.678883  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:05:58.705625  352121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:05:58.728926  352121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:05:58.831932  352121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:05:58.929486  352121 docker.go:234] disabling docker service ...
	I1219 03:05:58.929541  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:05:58.944770  352121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:05:58.958306  352121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:05:59.062259  352121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:05:59.150570  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:05:59.163650  352121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:05:59.178297  352121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:05:59.178363  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.187297  352121 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:05:59.187364  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.198433  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.208772  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.217632  352121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:05:59.226347  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.236018  352121 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.245884  352121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:05:59.255295  352121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:05:59.263398  352121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:05:59.271671  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:05:59.372273  352121 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:05:59.546574  352121 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:05:59.546667  352121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:05:59.552176  352121 start.go:564] Will wait 60s for crictl version
	I1219 03:05:59.552257  352121 ssh_runner.go:195] Run: which crictl
	I1219 03:05:59.557121  352121 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:05:59.591016  352121 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:05:59.591106  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.628329  352121 ssh_runner.go:195] Run: crio --version
	I1219 03:05:59.666833  352121 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1219 03:05:58.884349  350034 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:05:58.887370  350034 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:05:58.887396  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.389654  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:05:59.888847  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:00.388430  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
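The kapi wait above polls the kubernetes-dashboard namespace roughly every 500ms for a pod matching app.kubernetes.io/name=kubernetes-dashboard-web and reports its phase (Pending here) until it becomes Running. A rough client-go equivalent, assuming a hypothetical waitForPodRunning helper rather than minikube's kapi package:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning lists pods matching the label selector until at least one
// reports phase Running, mirroring the Pending -> Running transitions printed
// by the wait loop above.
func waitForPodRunning(clientset *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := clientset.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, pod := range pods.Items {
				if pod.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no Running pod for %q in %q after %s", selector, ns, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	err = waitForPodRunning(clientset, "kubernetes-dashboard",
		"app.kubernetes.io/name=kubernetes-dashboard-web", 9*time.Minute)
	fmt.Println(err)
}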
	I1219 03:05:59.668307  352121 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-717222 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:05:59.692145  352121 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1219 03:05:59.697760  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.712259  352121 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:05:59.712414  352121 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1219 03:05:59.712469  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.758952  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.758977  352121 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:05:59.759041  352121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:05:59.795008  352121 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:05:59.795037  352121 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:05:59.795046  352121 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1219 03:05:59.795178  352121 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-717222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:05:59.795259  352121 ssh_runner.go:195] Run: crio config
	I1219 03:05:59.864102  352121 cni.go:84] Creating CNI manager for ""
	I1219 03:05:59.864128  352121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:05:59.864150  352121 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:05:59.864179  352121 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-717222 NodeName:default-k8s-diff-port-717222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:05:59.864362  352121 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-717222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:05:59.864462  352121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:05:59.875851  352121 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:05:59.875922  352121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:05:59.884464  352121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1219 03:05:59.899134  352121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:05:59.915215  352121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1219 03:05:59.929131  352121 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:05:59.933193  352121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:05:59.944310  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:00.032535  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:00.050232  352121 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222 for IP: 192.168.94.2
	I1219 03:06:00.050255  352121 certs.go:195] generating shared ca certs ...
	I1219 03:06:00.050283  352121 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.050682  352121 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:06:00.050862  352121 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:06:00.050911  352121 certs.go:257] generating profile certs ...
	I1219 03:06:00.051065  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/client.key
	I1219 03:06:00.051146  352121 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key.a5493f25
	I1219 03:06:00.051208  352121 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key
	I1219 03:06:00.051349  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:06:00.051384  352121 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:06:00.051393  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:06:00.051430  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:06:00.051457  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:06:00.051489  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:06:00.051550  352121 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:06:00.053211  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:06:00.075839  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:06:00.100173  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:06:00.124618  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:06:00.149956  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:06:00.170353  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:06:00.189275  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:06:00.209284  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/default-k8s-diff-port-717222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:06:00.228628  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:06:00.250172  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:06:00.275836  352121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:06:00.299659  352121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:06:00.316102  352121 ssh_runner.go:195] Run: openssl version
	I1219 03:06:00.322905  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.332781  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:06:00.343197  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348016  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.348075  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:06:00.393419  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:06:00.402299  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.411729  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:06:00.420351  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424769  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.424832  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:06:00.477146  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:06:00.485938  352121 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.495504  352121 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:06:00.504981  352121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509807  352121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.509870  352121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:06:00.553357  352121 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:06:00.563012  352121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:06:00.568396  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:06:00.622821  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:06:00.682127  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:06:00.745366  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:06:00.806418  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:06:00.854164  352121 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
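The openssl x509 -checkend 86400 runs above confirm that none of the existing control-plane certificates expire within the next 24 hours before the cluster is restarted with them. The same check expressed in Go with crypto/x509 instead of shelling out to openssl (illustration only; the helper name is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, i.e. the condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, path := range certs {
		soon, err := expiresWithin(path, 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", path, soon)
	}
}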
	I1219 03:06:00.896520  352121 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-717222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-717222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:06:00.896640  352121 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:06:00.896757  352121 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:06:00.932488  352121 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:06:00.932513  352121 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:06:00.932519  352121 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:06:00.932523  352121 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:06:00.932538  352121 cri.go:92] found id: ""
	I1219 03:06:00.932585  352121 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:06:00.949831  352121 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:06:00Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:06:00.949907  352121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:06:00.960453  352121 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:06:00.960473  352121 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:06:00.960520  352121 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:06:00.969747  352121 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:06:00.971295  352121 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-717222" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.972255  352121 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-717222" cluster setting kubeconfig missing "default-k8s-diff-port-717222" context setting]
	I1219 03:06:00.973245  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.975092  352121 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:06:00.986333  352121 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1219 03:06:00.986370  352121 kubeadm.go:602] duration metric: took 25.891084ms to restartPrimaryControlPlane
	I1219 03:06:00.986381  352121 kubeadm.go:403] duration metric: took 89.870495ms to StartCluster
	I1219 03:06:00.986399  352121 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.986465  352121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:06:00.988984  352121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:06:00.989885  352121 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:06:00.990020  352121 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:06:00.990126  352121 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990162  352121 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990171  352121 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:06:00.990156  352121 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990202  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990217  352121 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-717222"
	W1219 03:06:00.990229  352121 addons.go:248] addon dashboard should already be in state true
	I1219 03:06:00.990249  352121 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-717222"
	I1219 03:06:00.990136  352121 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:06:00.990260  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:00.990288  352121 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-717222"
	I1219 03:06:00.990658  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990728  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.990961  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:00.993354  352121 out.go:179] * Verifying Kubernetes components...
	I1219 03:06:00.994888  352121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:06:01.021617  352121 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:01.021640  352121 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:06:01.021711  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.025911  352121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:06:01.027271  352121 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.027318  352121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:06:01.027479  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.027961  352121 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-717222"
	W1219 03:06:01.027983  352121 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:06:01.028009  352121 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:06:01.028455  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:01.058853  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.066205  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.072839  352121 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:01.072864  352121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:06:01.072921  352121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:06:01.100032  352121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:06:01.171523  352121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:06:01.187709  352121 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:06:01.188379  352121 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:01.193383  352121 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:06:01.197836  352121 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:06:01.203342  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:06:01.235359  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:06:02.386268  352121 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.188390942s)
	I1219 03:06:02.386368  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:06:03.037231  352121 node_ready.go:49] node "default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:03.037269  352121 node_ready.go:38] duration metric: took 1.848860443s for node "default-k8s-diff-port-717222" to be "Ready" ...
	I1219 03:06:03.037285  352121 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:06:03.037340  352121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:06:00.888463  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.390478  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:01.889172  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.392135  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:02.887803  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.395356  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:03.887597  350034 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:04.388564  350034 kapi.go:107] duration metric: took 5.504214625s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:04.390500  350034 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-805185 addons enable metrics-server
	
	I1219 03:06:04.392729  350034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:06:04.394312  350034 addons.go:546] duration metric: took 11.644922855s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:04.394358  350034 start.go:247] waiting for cluster config update ...
	I1219 03:06:04.394374  350034 start.go:256] writing updated cluster config ...
	I1219 03:06:04.394679  350034 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:04.399689  350034 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:04.404337  350034 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:03.829353  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.625958762s)
	I1219 03:06:03.829435  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.59404762s)
	I1219 03:06:06.147413  352121 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.761002493s)
	I1219 03:06:06.147503  352121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:06:06.147635  352121 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.110275887s)
	I1219 03:06:06.147788  352121 api_server.go:72] duration metric: took 5.157862117s to wait for apiserver process to appear ...
	I1219 03:06:06.147803  352121 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:06:06.147822  352121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1219 03:06:06.158489  352121 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1219 03:06:06.160663  352121 api_server.go:141] control plane version: v1.34.3
	I1219 03:06:06.160691  352121 api_server.go:131] duration metric: took 12.881794ms to wait for apiserver health ...
	I1219 03:06:06.160726  352121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:06:06.167594  352121 system_pods.go:59] 8 kube-system pods found
	I1219 03:06:06.167645  352121 system_pods.go:61] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.167662  352121 system_pods.go:61] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.167671  352121 system_pods.go:61] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.167680  352121 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.167689  352121 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.167695  352121 system_pods.go:61] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.167733  352121 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.167740  352121 system_pods.go:61] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.167750  352121 system_pods.go:74] duration metric: took 7.015044ms to wait for pod list to return data ...
	I1219 03:06:06.167763  352121 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:06:06.172355  352121 default_sa.go:45] found service account: "default"
	I1219 03:06:06.172383  352121 default_sa.go:55] duration metric: took 4.612941ms for default service account to be created ...
	I1219 03:06:06.172394  352121 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:06:06.266672  352121 system_pods.go:86] 8 kube-system pods found
	I1219 03:06:06.266726  352121 system_pods.go:89] "coredns-66bc5c9577-dskxl" [6e82652d-1118-425b-9dc8-2a0cc50bbb7b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:06:06.266739  352121 system_pods.go:89] "etcd-default-k8s-diff-port-717222" [3284f6a2-4456-4403-a844-268ee749b8a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:06:06.266748  352121 system_pods.go:89] "kindnet-zgcrn" [9b82b4fb-dce9-4bca-890c-0d0d9b7fc92a] Running
	I1219 03:06:06.266766  352121 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-717222" [5a199903-90e2-4725-9347-0814373549da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:06:06.266780  352121 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-717222" [d337119f-2c6b-44ca-98af-e3ddadf1cbe6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:06:06.266794  352121 system_pods.go:89] "kube-proxy-mr7c8" [c6d5e13e-bf1d-4f00-8d1d-0711294f20f7] Running
	I1219 03:06:06.266807  352121 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-717222" [63fc3f7b-63e4-496e-8036-aaa9d4f0d841] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:06:06.266814  352121 system_pods.go:89] "storage-provisioner" [38c2d00a-9d6a-43a7-b9d5-f690dac30c87] Running
	I1219 03:06:06.266828  352121 system_pods.go:126] duration metric: took 94.426368ms to wait for k8s-apps to be running ...
	I1219 03:06:06.266837  352121 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:06:06.266897  352121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:06:06.349488  352121 system_svc.go:56] duration metric: took 82.641061ms WaitForService to wait for kubelet
	I1219 03:06:06.349525  352121 kubeadm.go:587] duration metric: took 5.359602561s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:06:06.349549  352121 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:06:06.350103  352121 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-717222"
	I1219 03:06:06.350491  352121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:06:06.365789  352121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:06:06.365826  352121 node_conditions.go:123] node cpu capacity is 8
	I1219 03:06:06.365844  352121 node_conditions.go:105] duration metric: took 16.288389ms to run NodePressure ...
	I1219 03:06:06.365861  352121 start.go:242] waiting for startup goroutines ...
	I1219 03:06:06.386126  352121 out.go:179] * Verifying dashboard addon...
	I1219 03:06:06.389114  352121 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:06:06.393439  352121 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395667  352121 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:06:07.395691  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:07.895613  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	W1219 03:06:06.411012  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:08.413428  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:08.393607  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:08.894323  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.395092  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:09.892933  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.393259  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:10.894360  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.394632  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:11.893492  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.393217  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:12.892698  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.423636  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:13.893633  352121 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:06:14.392860  352121 kapi.go:107] duration metric: took 8.003743298s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:06:14.394678  352121 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-717222 addons enable metrics-server
	
	I1219 03:06:14.396056  352121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1219 03:06:10.911668  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:13.410917  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:15.411844  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:14.397218  352121 addons.go:546] duration metric: took 13.407198611s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:06:14.397266  352121 start.go:247] waiting for cluster config update ...
	I1219 03:06:14.397281  352121 start.go:256] writing updated cluster config ...
	I1219 03:06:14.397606  352121 ssh_runner.go:195] Run: rm -f paused
	I1219 03:06:14.402992  352121 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:14.409994  352121 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	W1219 03:06:16.416124  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:17.411983  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:19.909949  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:18.416198  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:20.916272  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:21.910435  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:24.410080  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:23.415501  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:25.416077  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:27.916104  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:26.910790  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	W1219 03:06:29.410901  350034 pod_ready.go:104] pod "coredns-66bc5c9577-8gphx" is not "Ready", error: <nil>
	I1219 03:06:30.410731  350034 pod_ready.go:94] pod "coredns-66bc5c9577-8gphx" is "Ready"
	I1219 03:06:30.410767  350034 pod_ready.go:86] duration metric: took 26.00640693s for pod "coredns-66bc5c9577-8gphx" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.413681  350034 pod_ready.go:83] waiting for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.418650  350034 pod_ready.go:94] pod "etcd-embed-certs-805185" is "Ready"
	I1219 03:06:30.418674  350034 pod_ready.go:86] duration metric: took 4.958889ms for pod "etcd-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.420801  350034 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.424880  350034 pod_ready.go:94] pod "kube-apiserver-embed-certs-805185" is "Ready"
	I1219 03:06:30.424905  350034 pod_ready.go:86] duration metric: took 4.082079ms for pod "kube-apiserver-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.427435  350034 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.608611  350034 pod_ready.go:94] pod "kube-controller-manager-embed-certs-805185" is "Ready"
	I1219 03:06:30.608639  350034 pod_ready.go:86] duration metric: took 181.178673ms for pod "kube-controller-manager-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:30.808837  350034 pod_ready.go:83] waiting for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.208761  350034 pod_ready.go:94] pod "kube-proxy-p8pqg" is "Ready"
	I1219 03:06:31.208794  350034 pod_ready.go:86] duration metric: took 399.926497ms for pod "kube-proxy-p8pqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.408111  350034 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808631  350034 pod_ready.go:94] pod "kube-scheduler-embed-certs-805185" is "Ready"
	I1219 03:06:31.808661  350034 pod_ready.go:86] duration metric: took 400.520634ms for pod "kube-scheduler-embed-certs-805185" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:31.808676  350034 pod_ready.go:40] duration metric: took 27.408945135s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:31.854031  350034 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:31.855666  350034 out.go:179] * Done! kubectl is now configured to use "embed-certs-805185" cluster and "default" namespace by default
	W1219 03:06:29.916349  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:31.916589  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:34.416790  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	W1219 03:06:36.915948  352121 pod_ready.go:104] pod "coredns-66bc5c9577-dskxl" is not "Ready", error: <nil>
	I1219 03:06:37.416133  352121 pod_ready.go:94] pod "coredns-66bc5c9577-dskxl" is "Ready"
	I1219 03:06:37.416166  352121 pod_ready.go:86] duration metric: took 23.006142155s for pod "coredns-66bc5c9577-dskxl" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.418804  352121 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.423037  352121 pod_ready.go:94] pod "etcd-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.423066  352121 pod_ready.go:86] duration metric: took 4.235839ms for pod "etcd-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.425227  352121 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.429099  352121 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.429133  352121 pod_ready.go:86] duration metric: took 3.885497ms for pod "kube-apiserver-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.430894  352121 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.614208  352121 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:37.614234  352121 pod_ready.go:86] duration metric: took 183.319695ms for pod "kube-controller-manager-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:37.814884  352121 pod_ready.go:83] waiting for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.214236  352121 pod_ready.go:94] pod "kube-proxy-mr7c8" is "Ready"
	I1219 03:06:38.214264  352121 pod_ready.go:86] duration metric: took 399.351737ms for pod "kube-proxy-mr7c8" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.413883  352121 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813953  352121 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-717222" is "Ready"
	I1219 03:06:38.813980  352121 pod_ready.go:86] duration metric: took 400.070957ms for pod "kube-scheduler-default-k8s-diff-port-717222" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:06:38.813992  352121 pod_ready.go:40] duration metric: took 24.410965975s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:06:38.857548  352121 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:06:38.860155  352121 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-717222" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.736394898Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=717982ce-b0aa-47e4-97b9-7ccc9a3d471e name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.737528512Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=75c156d7-ee04-4c56-8f11-b0efea376553 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.737669801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742166616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742306458Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4ebf7307f813aac6a8d23d828e41e3d28400a9be6b5a1e96c9c7ac302e20c6f7/merged/etc/passwd: no such file or directory"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742328757Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4ebf7307f813aac6a8d23d828e41e3d28400a9be6b5a1e96c9c7ac302e20c6f7/merged/etc/group: no such file or directory"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.742530495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.773812294Z" level=info msg="Created container 7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f: kube-system/storage-provisioner/storage-provisioner" id=75c156d7-ee04-4c56-8f11-b0efea376553 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.774507779Z" level=info msg="Starting container: 7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f" id=d7a3d3c0-89c8-443d-926a-bf8921e3011d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:05:41 no-preload-278042 crio[566]: time="2025-12-19T03:05:41.776440067Z" level=info msg="Started container" PID=3331 containerID=7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f description=kube-system/storage-provisioner/storage-provisioner id=d7a3d3c0-89c8-443d-926a-bf8921e3011d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c464fbce01c73bc9002a59a55e969a9dcc96c829129ee9c487d0762b3a2a4169
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.362057944Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366564465Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366589659Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.366607882Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370444341Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370467276Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.370484152Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374344046Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374374846Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.374396298Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378400072Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378429166Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.378444369Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.382115308Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 19 03:05:51 no-preload-278042 crio[566]: time="2025-12-19T03:05:51.382141451Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	7d6861325db2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   c464fbce01c73       storage-provisioner                                     kube-system
	5935e257f3a09       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   d0d6b23f0e1dc       kubernetes-dashboard-auth-bf9cfccb5-mrw8q               kubernetes-dashboard
	29fec7f14635a       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   0e0159aebbb3f       kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk   kubernetes-dashboard
	94493b4e71313       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   a29ecbc685444       kubernetes-dashboard-kong-78b7499b45-z266g              kubernetes-dashboard
	0c57b1705660a       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   a29ecbc685444       kubernetes-dashboard-kong-78b7499b45-z266g              kubernetes-dashboard
	bba0b0d89d520       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   8dedb4931ab92       kubernetes-dashboard-web-7f7574785f-h2jf5               kubernetes-dashboard
	d438e50bdc5cf       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   2d9da507d045f       kubernetes-dashboard-api-c7898775-zhmv8                 kubernetes-dashboard
	88f8999e01d5b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                           18 minutes ago      Running             coredns                                0                   192133b79d756       coredns-7d764666f9-vj7lm                                kube-system
	53f1be74e873d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   c464fbce01c73       storage-provisioner                                     kube-system
	bf4ed13bede99       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   1a93d07c85274       busybox                                                 default
	98dcabe770e7d       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   c96cb5fa17a00       kindnet-xrp2s                                           kube-system
	757ccd2caa9cd       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                           18 minutes ago      Running             kube-proxy                             0                   4e59b01d6de99       kube-proxy-g2gm4                                        kube-system
	5f148a7e487d8       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                           18 minutes ago      Running             etcd                                   0                   03f900ecc7129       etcd-no-preload-278042                                  kube-system
	001407ac1b909       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                           18 minutes ago      Running             kube-controller-manager                0                   d44cf856d1c8b       kube-controller-manager-no-preload-278042               kube-system
	973ccccab2576       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                           18 minutes ago      Running             kube-scheduler                         0                   3f68017fcfb0f       kube-scheduler-no-preload-278042                        kube-system
	821b9cbc72eb6       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                           18 minutes ago      Running             kube-apiserver                         0                   46991eb1a5abd       kube-apiserver-no-preload-278042                        kube-system
	
	
	==> coredns [88f8999e01d5bc23ebc968525542d039ae5c65ebd88f7ecad360345dc8277d94] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:57319 - 34037 "HINFO IN 3016703752619529984.3565104935656887276. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019206295s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-278042
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-278042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-278042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-278042
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:23:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:23:53 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:23:53 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:23:53 +0000   Fri, 19 Dec 2025 03:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:23:53 +0000   Fri, 19 Dec 2025 03:04:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-278042
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                8fbc19b8-72f7-4938-83d9-fc3015dde7d1
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7d764666f9-vj7lm                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-278042                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-xrp2s                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-no-preload-278042                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-278042                200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-g2gm4                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-278042                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-c7898775-zhmv8                  100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-bf9cfccb5-mrw8q                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-z266g               0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-h2jf5                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  19m   node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node no-preload-278042 event: Registered Node no-preload-278042 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [5f148a7e487d8dd3e55b516e027c5d508a26bdfe82dca079a9e980052c56ba2a] <==
	{"level":"info","ts":"2025-12-19T03:05:08.315130Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:05:08.988344Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-19T03:05:08.988524Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:05:08.988542Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989244Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989319Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:05:08.989346Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.989356Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-19T03:05:08.990632Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-278042 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:05:08.990634Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:05:08.990681Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:05:08.990883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:08.991615Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:05:08.992858Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:05:08.993684Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:05:09.001234Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:05:09.001416Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-19T03:15:09.026171Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":960}
	{"level":"info","ts":"2025-12-19T03:15:09.034559Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":960,"took":"7.955659ms","hash":4263527716,"current-db-size-bytes":3899392,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3899392,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:15:09.034609Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4263527716,"revision":960,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:09.031768Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1204}
	{"level":"info","ts":"2025-12-19T03:20:09.034352Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1204,"took":"2.163711ms","hash":2275355149,"current-db-size-bytes":3899392,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1998848,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:20:09.034391Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2275355149,"revision":1204,"compact-revision":960}
	
	
	==> kernel <==
	 03:24:00 up  1:06,  0 user,  load average: 0.55, 0.54, 1.14
	Linux no-preload-278042 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98dcabe770e7dcf718bfbc7938b663e3dd19fd9ad86c2bd261a4099febad9b1b] <==
	I1219 03:21:51.360619       1 main.go:301] handling current node
	I1219 03:22:01.369216       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:01.369247       1 main.go:301] handling current node
	I1219 03:22:11.367736       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:11.367767       1 main.go:301] handling current node
	I1219 03:22:21.362127       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:21.362158       1 main.go:301] handling current node
	I1219 03:22:31.365054       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:31.365106       1 main.go:301] handling current node
	I1219 03:22:41.367805       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:41.367840       1 main.go:301] handling current node
	I1219 03:22:51.360347       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:22:51.360384       1 main.go:301] handling current node
	I1219 03:23:01.367434       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:01.367473       1 main.go:301] handling current node
	I1219 03:23:11.368784       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:11.368827       1 main.go:301] handling current node
	I1219 03:23:21.360067       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:21.360103       1 main.go:301] handling current node
	I1219 03:23:31.360833       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:31.360886       1 main.go:301] handling current node
	I1219 03:23:41.366471       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:41.366500       1 main.go:301] handling current node
	I1219 03:23:51.360858       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1219 03:23:51.360894       1 main.go:301] handling current node
	
	
	==> kube-apiserver [821b9cbc72eb687248584e5b24c8fee86a84d2d127bb4360eeb9e0bc0f2c67ec] <==
	W1219 03:05:13.385125       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.401923       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.413483       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.423560       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.434652       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.450356       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.470070       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.481151       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.492407       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.503960       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.519221       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:13.528090       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 03:05:13.711310       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:05:13.761392       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:13.862098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:13.961908       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:15.702973       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:05:15.771287       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:15.776040       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:15.788145       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.102.118.21"}
	I1219 03:05:15.795336       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.103.152.147"}
	I1219 03:05:15.798838       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.54.162"}
	I1219 03:05:15.807348       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.173.60"}
	I1219 03:05:15.813204       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.235.156"}
	I1219 03:15:10.324126       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [001407ac1b90920309b7c79710e785dd82d392c6a8a43c444682d0837efd8cae] <==
	I1219 03:05:13.463362       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463414       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463386       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1219 03:05:13.463438       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463465       1 range_allocator.go:177] "Sending events to api server"
	I1219 03:05:13.463505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1219 03:05:13.463516       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:13.463521       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463634       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463681       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.463711       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464012       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464187       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464219       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464367       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464376       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464393       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.464411       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:13.472055       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:14.564522       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.564546       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:05:14.564553       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.564553       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:05:14.572694       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:14.581900       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [757ccd2caa9cd35651079514b95b85a3612146f0d5b17fa735322d1e2ee036f1] <==
	I1219 03:05:11.015248       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:11.078140       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:11.178544       1 shared_informer.go:377] "Caches are synced"
	I1219 03:05:11.178579       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1219 03:05:11.178664       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:11.202324       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:11.202395       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:05:11.207676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:11.208164       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:05:11.208215       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:11.212272       1 config.go:200] "Starting service config controller"
	I1219 03:05:11.212297       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:11.212328       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:11.212333       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:11.212401       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:11.212410       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:11.212604       1 config.go:309] "Starting node config controller"
	I1219 03:05:11.212646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:11.212671       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:11.313219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:05:11.313270       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:11.313557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [973ccccab25764a0f9dd88e6a5d1d5565060554451f1dcc3dd0e68a9aed4c9f2] <==
	I1219 03:05:08.762319       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:05:10.311124       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:05:10.311291       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:05:10.311314       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:05:10.311345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:05:10.339015       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:05:10.339346       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:10.343655       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:10.343694       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:10.345418       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:10.347040       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:05:10.447312       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:20:26 no-preload-278042 kubelet[713]: E1219 03:20:26.562657     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:20:30 no-preload-278042 kubelet[713]: E1219 03:20:30.562630     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:20:30 no-preload-278042 kubelet[713]: E1219 03:20:30.562787     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:20:48 no-preload-278042 kubelet[713]: E1219 03:20:48.562287     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:20:54 no-preload-278042 kubelet[713]: E1219 03:20:54.562796     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:21:06 no-preload-278042 kubelet[713]: E1219 03:21:06.562680     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:21:11 no-preload-278042 kubelet[713]: E1219 03:21:11.563417     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:21:28 no-preload-278042 kubelet[713]: E1219 03:21:28.562333     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:21:31 no-preload-278042 kubelet[713]: E1219 03:21:31.563340     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:21:53 no-preload-278042 kubelet[713]: E1219 03:21:53.563344     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:22:03 no-preload-278042 kubelet[713]: E1219 03:22:03.563479     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:22:18 no-preload-278042 kubelet[713]: E1219 03:22:18.562844     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:22:26 no-preload-278042 kubelet[713]: E1219 03:22:26.562406     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:22:37 no-preload-278042 kubelet[713]: E1219 03:22:37.563042     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:22:49 no-preload-278042 kubelet[713]: E1219 03:22:49.563063     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-z266g" containerName="proxy"
	Dec 19 03:22:54 no-preload-278042 kubelet[713]: E1219 03:22:54.563196     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-278042" containerName="etcd"
	Dec 19 03:23:18 no-preload-278042 kubelet[713]: E1219 03:23:18.563266     713 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-ncxlk" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:23:26 no-preload-278042 kubelet[713]: E1219 03:23:26.562431     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-278042" containerName="kube-scheduler"
	Dec 19 03:23:48 no-preload-278042 kubelet[713]: E1219 03:23:48.562683     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-278042" containerName="kube-apiserver"
	Dec 19 03:23:49 no-preload-278042 kubelet[713]: E1219 03:23:49.562573     713 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-vj7lm" containerName="coredns"
	Dec 19 03:23:51 no-preload-278042 kubelet[713]: E1219 03:23:51.563334     713 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-278042" containerName="kube-controller-manager"
	Dec 19 03:23:55 no-preload-278042 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:23:55 no-preload-278042 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:23:55 no-preload-278042 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:23:55 no-preload-278042 systemd[1]: kubelet.service: Consumed 25.087s CPU time.
	
	
	==> kubernetes-dashboard [29fec7f14635a794f200efe276e62a0fc3151ea3d427cb21da297c53114fd8b9] <==
	E1219 03:21:25.195161       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:22:25.195098       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:25.194956       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:21:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:21:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:21:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:22:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:05 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:15 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:17 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:25 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:35 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:45 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:23:47 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:55 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	
	
	==> kubernetes-dashboard [5935e257f3a0964bf239f408c9308c3b84961c75f06f32b2fda50133fe1ddbbd] <==
	I1219 03:05:26.300513       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:05:26.300578       1 init.go:49] Using in-cluster config
	I1219 03:05:26.300723       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [bba0b0d89d520cc6ca6a07611a31a2778cb1e41e66784ac255b63f970adcffb7] <==
	I1219 03:05:19.397607       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:05:19.397662       1 init.go:48] Using in-cluster config
	I1219 03:05:19.397903       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [d438e50bdc5cf86c6ad101cf6a3ca9c6c7091524bb7ffd95705de1d1a5ed8994] <==
	I1219 03:05:17.224225       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:05:17.224299       1 init.go:49] Using in-cluster config
	I1219 03:05:17.224498       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:05:17.224512       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:05:17.224518       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:05:17.224524       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:05:17.230241       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 03:05:17.230266       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:05:17.233542       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:05:17.236374       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:05:47.240946       1 manager.go:101] Successful request to sidecar
	
	
	==> storage-provisioner [53f1be74e873df0c32c600b228ba909dde859aa38c23f9a71f536c90aa4e096f] <==
	I1219 03:05:10.950483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:05:40.952323       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7d6861325db2a3ac5ccac816c8e37a3daba04c6ecb4c268229d0eed2b47c364f] <==
	W1219 03:23:35.369382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:37.372268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:37.377522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:39.380227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:39.384143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:41.387062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:41.392658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:43.395737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:43.399497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:45.402464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:45.406320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:47.409103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:47.413050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:49.416534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:49.421078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:51.424048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:51.427837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:53.432223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:53.437004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:55.440893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:55.444727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:57.448825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:57.453580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:59.456508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:23:59.461357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-278042 -n no-preload-278042: exit status 2 (345.129965ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-278042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (309.245217ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
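The MK_ADDON_ENABLE_PAUSED failure above is minikube's paused-state check shelling out to runc inside the node container, with runc aborting because /run/runc does not exist. A minimal way to re-run that check by hand, offered only as a sketch (the profile/container name newest-cni-837172 is taken from this run, and the runc invocation is the exact command quoted in the stderr block; nothing else is assumed):

	# Check for the state directory that runc reports as missing.
	docker exec newest-cni-837172 ls -ld /run/runc
	# Re-run the command minikube executed, per the stderr above; with /run/runc
	# absent it fails with the same "no such file or directory" error.
	docker exec newest-cni-837172 sudo runc list -f json

If the first command shows the directory is missing, the second should reproduce the status-1 failure that surfaced here as exit status 11 from the addon enable.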
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-837172
helpers_test.go:244: (dbg) docker inspect newest-cni-837172:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83",
	        "Created": "2025-12-19T03:24:05.774434179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 372949,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:24:05.814858866Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/hostname",
	        "HostsPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/hosts",
	        "LogPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83-json.log",
	        "Name": "/newest-cni-837172",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-837172:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-837172",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83",
	                "LowerDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-837172",
	                "Source": "/var/lib/docker/volumes/newest-cni-837172/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-837172",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-837172",
	                "name.minikube.sigs.k8s.io": "newest-cni-837172",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c9fa609aa310d33d00185c9f0c777e6945683729fd6355f5ab85bb3940b0d051",
	            "SandboxKey": "/var/run/docker/netns/c9fa609aa310",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-837172": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "031824ca2cfc364eb4fab915cefaa7a9d15393eeb43e3a28ecfa7e5605c16dd1",
	                    "EndpointID": "066372cf12e2fd97f3fb23a7ac1d5f8eb615c9af2c55b14b450d57fb3b44ee2e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "56:d8:1f:9a:12:a5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-837172",
	                        "351fe078c7b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837172 -n newest-cni-837172
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-837172 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ stop    │ -p old-k8s-version-433330 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ stop    │ -p no-preload-278042 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:01.036023  371990 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:01.036565  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.036582  371990 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:01.036589  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.037114  371990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:01.038234  371990 out.go:368] Setting JSON to false
	I1219 03:24:01.039510  371990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3992,"bootTime":1766110649,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:01.039592  371990 start.go:143] virtualization: kvm guest
	I1219 03:24:01.041656  371990 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:01.043211  371990 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:01.043253  371990 notify.go:221] Checking for updates...
	I1219 03:24:01.045604  371990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:01.046873  371990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:01.047985  371990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:01.052214  371990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:01.053413  371990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:01.055079  371990 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055198  371990 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055324  371990 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:01.055430  371990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:01.080518  371990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:01.080672  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.143010  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.132535066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.143105  371990 docker.go:319] overlay module found
	I1219 03:24:01.144954  371990 out.go:179] * Using the docker driver based on user configuration
	I1219 03:24:01.146278  371990 start.go:309] selected driver: docker
	I1219 03:24:01.146299  371990 start.go:928] validating driver "docker" against <nil>
	I1219 03:24:01.146315  371990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:01.147198  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.207023  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.196664778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.207180  371990 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1219 03:24:01.207207  371990 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1219 03:24:01.207525  371990 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:01.209632  371990 out.go:179] * Using Docker driver with root privileges
	I1219 03:24:01.210891  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:01.210974  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:01.210985  371990 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 03:24:01.211049  371990 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.212320  371990 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:01.213422  371990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:01.214779  371990 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:01.215953  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.216006  371990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:01.216025  371990 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:01.216047  371990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:01.216120  371990 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:01.216133  371990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:01.216218  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:01.216239  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json: {Name:mkf2bb7657c731e279d378a607e1a523b320a47e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
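The cluster config dumped above is persisted as JSON at the config.json path shown in the previous line. As a rough illustration only (assuming jq is available on the host and that the JSON field names mirror the struct fields in the dump), the key settings can be read back with:

	jq '{Driver, KubernetesVersion: .KubernetesConfig.KubernetesVersion, NetworkPlugin: .KubernetesConfig.NetworkPlugin, ExtraOptions: .KubernetesConfig.ExtraOptions}' \
	  /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json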
	I1219 03:24:01.237349  371990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:01.237368  371990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:01.237386  371990 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:01.237420  371990 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:01.237512  371990 start.go:364] duration metric: took 75.602µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:01.237534  371990 start.go:93] Provisioning new machine with config: &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:01.237590  371990 start.go:125] createHost starting for "" (driver="docker")
	I1219 03:24:01.239751  371990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1219 03:24:01.239974  371990 start.go:159] libmachine.API.Create for "newest-cni-837172" (driver="docker")
	I1219 03:24:01.240017  371990 client.go:173] LocalClient.Create starting
	I1219 03:24:01.240087  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem
	I1219 03:24:01.240117  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240136  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240185  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem
	I1219 03:24:01.240204  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240213  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240512  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 03:24:01.257883  371990 cli_runner.go:211] docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 03:24:01.258008  371990 network_create.go:284] running [docker network inspect newest-cni-837172] to gather additional debugging logs...
	I1219 03:24:01.258034  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172
	W1219 03:24:01.275377  371990 cli_runner.go:211] docker network inspect newest-cni-837172 returned with exit code 1
	I1219 03:24:01.275412  371990 network_create.go:287] error running [docker network inspect newest-cni-837172]: docker network inspect newest-cni-837172: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-837172 not found
	I1219 03:24:01.275429  371990 network_create.go:289] output of [docker network inspect newest-cni-837172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-837172 not found
	
	** /stderr **
	I1219 03:24:01.275535  371990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:01.294388  371990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d70e62b79a31 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:cf:22:72:cb:a0} reservation:<nil>}
	I1219 03:24:01.295272  371990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-980aea652065 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ba:dd:9c:97:fb:7d} reservation:<nil>}
	I1219 03:24:01.296258  371990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42b42f6a5044 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:1e:31:1b:21:84} reservation:<nil>}
	I1219 03:24:01.297569  371990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec48c0}
	I1219 03:24:01.297599  371990 network_create.go:124] attempt to create docker network newest-cni-837172 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1219 03:24:01.297651  371990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-837172 newest-cni-837172
	I1219 03:24:01.350655  371990 network_create.go:108] docker network newest-cni-837172 192.168.76.0/24 created
	I1219 03:24:01.350682  371990 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-837172" container
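After the taken subnets are skipped and the bridge network is created on 192.168.76.0/24, the result can be checked with the same docker CLI calls the log itself uses; a minimal sketch run on the host:

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' newest-cni-837172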
	I1219 03:24:01.350794  371990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 03:24:01.370331  371990 cli_runner.go:164] Run: docker volume create newest-cni-837172 --label name.minikube.sigs.k8s.io=newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true
	I1219 03:24:01.391519  371990 oci.go:103] Successfully created a docker volume newest-cni-837172
	I1219 03:24:01.391624  371990 cli_runner.go:164] Run: docker run --rm --name newest-cni-837172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --entrypoint /usr/bin/test -v newest-cni-837172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1219 03:24:01.840345  371990 oci.go:107] Successfully prepared a docker volume newest-cni-837172
	I1219 03:24:01.840449  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.840465  371990 kic.go:194] Starting extracting preloaded images to volume ...
	I1219 03:24:01.840529  371990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1219 03:24:05.697885  371990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.857303195s)
	I1219 03:24:05.697924  371990 kic.go:203] duration metric: took 3.857455339s to extract preloaded images to volume ...
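The preload tarball is unpacked into the newest-cni-837172 volume, which later becomes /var inside the node container. A quick check that the volume exists and carries the extracted image store, a sketch only (the du path assumes the cri-o preload lands under lib/containers inside /extractDir, as implied by the extraction command above):

	docker volume inspect newest-cni-837172
	docker run --rm --entrypoint /usr/bin/du \
	  -v newest-cni-837172:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 \
	  -sh /var/lib/containers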
	W1219 03:24:05.698024  371990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1219 03:24:05.698058  371990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1219 03:24:05.698100  371990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 03:24:05.757547  371990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-837172 --name newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-837172 --network newest-cni-837172 --ip 192.168.76.2 --volume newest-cni-837172:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1219 03:24:06.051568  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Running}}
	I1219 03:24:06.072261  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.093313  371990 cli_runner.go:164] Run: docker exec newest-cni-837172 stat /var/lib/dpkg/alternatives/iptables
	I1219 03:24:06.144238  371990 oci.go:144] the created container "newest-cni-837172" has a running status.
	I1219 03:24:06.144278  371990 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa...
	I1219 03:24:06.230796  371990 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 03:24:06.256299  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.273734  371990 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1219 03:24:06.273758  371990 kic_runner.go:114] Args: [docker exec --privileged newest-cni-837172 chown docker:docker /home/docker/.ssh/authorized_keys]
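With the generated public key installed as /home/docker/.ssh/authorized_keys, the node is reachable over the published 22/tcp port. A manual equivalent of the SSH session minikube opens next (a sketch; the inspect template and key path are taken from this log, the mapped host port is 33138 in this run):

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-837172)
	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa \
	  -p "$PORT" docker@127.0.0.1 hostname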
	I1219 03:24:06.341522  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.363532  371990 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:06.363655  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:06.390168  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:06.390536  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:06.390552  371990 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:06.391620  371990 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34054->127.0.0.1:33138: read: connection reset by peer
	I1219 03:24:09.536680  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.536733  371990 ubuntu.go:182] provisioning hostname "newest-cni-837172"
	I1219 03:24:09.536797  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.555045  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.555325  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.555340  371990 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-837172 && echo "newest-cni-837172" | sudo tee /etc/hostname
	I1219 03:24:09.709116  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.709183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.727847  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.728289  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.728322  371990 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-837172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-837172/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-837172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:09.871486  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:09.871529  371990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:24:09.871588  371990 ubuntu.go:190] setting up certificates
	I1219 03:24:09.871600  371990 provision.go:84] configureAuth start
	I1219 03:24:09.871666  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:09.890551  371990 provision.go:143] copyHostCerts
	I1219 03:24:09.890608  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:24:09.890616  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:24:09.890710  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:24:09.890819  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:24:09.890829  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:24:09.890867  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:24:09.890920  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:24:09.890933  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:24:09.890959  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:24:09.891015  371990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-837172 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-837172]
	I1219 03:24:09.923962  371990 provision.go:177] copyRemoteCerts
	I1219 03:24:09.924021  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:09.924055  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.943177  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.046012  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:10.066001  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:10.083456  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:24:10.101464  371990 provision.go:87] duration metric: took 229.847544ms to configureAuth
	I1219 03:24:10.101492  371990 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:10.101673  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:10.101801  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.120532  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:10.120821  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:10.120839  371990 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:24:10.410477  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:24:10.410502  371990 machine.go:97] duration metric: took 4.046944113s to provisionDockerMachine
	I1219 03:24:10.410513  371990 client.go:176] duration metric: took 9.170488353s to LocalClient.Create
	I1219 03:24:10.410535  371990 start.go:167] duration metric: took 9.170561433s to libmachine.API.Create "newest-cni-837172"
	I1219 03:24:10.410546  371990 start.go:293] postStartSetup for "newest-cni-837172" (driver="docker")
	I1219 03:24:10.410559  371990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:10.410613  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:10.410664  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.430222  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.533641  371990 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:10.537745  371990 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:10.537783  371990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:10.537806  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:24:10.537857  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:24:10.537934  371990 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:24:10.538030  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:10.545818  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:10.566832  371990 start.go:296] duration metric: took 156.272185ms for postStartSetup
	I1219 03:24:10.567244  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.586641  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:10.586934  371990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:10.586987  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.604894  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.703924  371990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:10.708480  371990 start.go:128] duration metric: took 9.470874061s to createHost
	I1219 03:24:10.708519  371990 start.go:83] releasing machines lock for "newest-cni-837172", held for 9.47099552s
	I1219 03:24:10.708596  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.727823  371990 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:10.727853  371990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:10.727877  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.727922  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.748155  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.748577  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.899556  371990 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:10.906157  371990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:24:10.942010  371990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:10.946776  371990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:10.946834  371990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:10.972921  371990 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:24:10.972943  371990 start.go:496] detecting cgroup driver to use...
	I1219 03:24:10.972971  371990 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:10.973032  371990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:24:10.989146  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:24:11.002203  371990 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:11.002282  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:11.018422  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:11.035554  371990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:11.119919  371990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:11.207179  371990 docker.go:234] disabling docker service ...
	I1219 03:24:11.207252  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:11.225572  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:11.237859  371990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:11.323024  371990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:11.407303  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:24:11.419524  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:11.433341  371990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:24:11.433395  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.443408  371990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:24:11.443468  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.452460  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.460889  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.469451  371990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:11.477277  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.485766  371990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.499106  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.508174  371990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:11.515313  371990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:11.522319  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:11.604796  371990 ssh_runner.go:195] Run: sudo systemctl restart crio
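The sed edits above pin the pause image, the systemd cgroup manager, the conmon cgroup and the unprivileged-port sysctl in the 02-crio.conf drop-in before CRI-O is restarted. Run on the node, a simple way to read the result back:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl is-active crio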
	I1219 03:24:11.746317  371990 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:24:11.746376  371990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:24:11.750220  371990 start.go:564] Will wait 60s for crictl version
	I1219 03:24:11.750278  371990 ssh_runner.go:195] Run: which crictl
	I1219 03:24:11.753821  371990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:11.777608  371990 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:24:11.777714  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.804073  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.833640  371990 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:24:11.834886  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:11.852567  371990 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:11.856667  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:11.871316  371990 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:11.872497  371990 kubeadm.go:884] updating cluster {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:11.872642  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:11.872692  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.904183  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.904204  371990 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:24:11.904263  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.930999  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.931020  371990 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:11.931026  371990 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:24:11.931148  371990 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-837172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:24:11.931228  371990 ssh_runner.go:195] Run: crio config
	I1219 03:24:11.976472  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:11.976491  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:11.976503  371990 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:11.976531  371990 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-837172 NodeName:newest-cni-837172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:11.976658  371990 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-837172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:24:11.976739  371990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:11.985021  371990 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:11.985080  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:11.992859  371990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:24:12.006496  371990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:12.021643  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
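At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. One way to sanity-check it by hand, a sketch only since minikube drives kubeadm with its own phase flags later, is a dry run against the same binary path referenced in the kubelet unit above:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run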
	I1219 03:24:12.034441  371990 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:12.038092  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:12.047986  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:12.128789  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
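Once the unit file and the 10-kubeadm.conf drop-in are in place, daemon-reload plus start brings kubelet up; it will typically keep restarting until kubeadm init writes /etc/kubernetes/kubelet.conf. The effective unit, including the drop-in, can be reviewed on the node with:

	systemctl cat kubelet
	systemctl status kubelet --no-pager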
	I1219 03:24:12.152988  371990 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172 for IP: 192.168.76.2
	I1219 03:24:12.153016  371990 certs.go:195] generating shared ca certs ...
	I1219 03:24:12.153035  371990 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.153175  371990 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:24:12.153220  371990 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:24:12.153233  371990 certs.go:257] generating profile certs ...
	I1219 03:24:12.153289  371990 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key
	I1219 03:24:12.153302  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt with IP's: []
	I1219 03:24:12.271406  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt ...
	I1219 03:24:12.271435  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt: {Name:mke8fed86df635a05f54420e92870363146991f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271601  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key ...
	I1219 03:24:12.271612  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key: {Name:mk39737e3f76352137132fe8060ef391a0d43bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271690  371990 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b
	I1219 03:24:12.271717  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1219 03:24:12.379475  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b ...
	I1219 03:24:12.379503  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b: {Name:mkc4d74c8f8c4deb077c8f688d203329a2c5750d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379662  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b ...
	I1219 03:24:12.379675  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b: {Name:mk1b93ad6f4ca843c3104dc76975062dde81eaef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379761  371990 certs.go:382] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt
	I1219 03:24:12.379853  371990 certs.go:386] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key
	I1219 03:24:12.379918  371990 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key
	I1219 03:24:12.379940  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt with IP's: []
	I1219 03:24:12.467338  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt ...
	I1219 03:24:12.467368  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt: {Name:mk5dc8f653da407b5f14ca799301800eac0952c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467561  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key ...
	I1219 03:24:12.467581  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key: {Name:mk4063cc1af4dbf73c9c390b468c828c35385b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467821  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:24:12.467864  371990 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:12.467875  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:24:12.467901  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:12.467925  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:12.467953  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:12.468001  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:12.468519  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:12.487159  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:12.504306  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:12.521550  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:24:12.538418  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:12.554861  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:12.572166  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:12.589324  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:24:12.606224  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:12.625269  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:24:12.642642  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:24:12.658965  371990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:12.671458  371990 ssh_runner.go:195] Run: openssl version
	I1219 03:24:12.677537  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.684496  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:12.691660  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695495  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695541  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.730806  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:24:12.738920  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:24:12.746295  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.753462  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:24:12.760758  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764356  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764415  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.800484  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:12.809192  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8536.pem /etc/ssl/certs/51391683.0
	I1219 03:24:12.816759  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.825274  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:24:12.833125  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836939  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836993  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.871891  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:12.879672  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85362.pem /etc/ssl/certs/3ec20f2e.0
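	(The symlink sequence above is how minikube makes its CA trusted system-wide: OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filename. A minimal sketch of the same scheme, using the minikubeCA.pem path from this log:)
	  # compute the subject hash OpenSSL uses for lookup, then point <hash>.0 at the cert
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	  openssl verify /usr/share/ca-certificates/minikubeCA.pem   # should now report: OK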
	I1219 03:24:12.887040  371990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:12.890648  371990 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 03:24:12.890729  371990 kubeadm.go:401] StartCluster: {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:12.890825  371990 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:12.890893  371990 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:12.920058  371990 cri.go:92] found id: ""
	I1219 03:24:12.920133  371990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:12.928606  371990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:24:12.936934  371990 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1219 03:24:12.936985  371990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:24:12.945218  371990 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:24:12.945240  371990 kubeadm.go:158] found existing configuration files:
	
	I1219 03:24:12.945287  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 03:24:12.952614  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:24:12.952666  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:24:12.960262  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 03:24:12.967725  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:24:12.967831  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:24:12.975015  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.982506  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:24:12.982549  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.989686  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 03:24:12.997834  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:24:12.997888  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:24:13.005263  371990 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1219 03:24:13.041610  371990 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1219 03:24:13.041730  371990 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 03:24:13.106822  371990 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 03:24:13.106921  371990 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 03:24:13.106982  371990 kubeadm.go:319] OS: Linux
	I1219 03:24:13.107046  371990 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 03:24:13.107146  371990 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 03:24:13.107237  371990 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 03:24:13.107288  371990 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 03:24:13.107344  371990 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 03:24:13.107385  371990 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 03:24:13.107463  371990 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 03:24:13.107538  371990 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 03:24:13.164958  371990 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 03:24:13.165152  371990 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 03:24:13.165292  371990 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 03:24:13.174971  371990 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 03:24:13.178028  371990 out.go:252]   - Generating certificates and keys ...
	I1219 03:24:13.178136  371990 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 03:24:13.178232  371990 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 03:24:13.301903  371990 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 03:24:13.387971  371990 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 03:24:13.500057  371990 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 03:24:13.603458  371990 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 03:24:13.636925  371990 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 03:24:13.637122  371990 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:13.836231  371990 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 03:24:13.836371  371990 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:14.002346  371990 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 03:24:14.032095  371990 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 03:24:14.137234  371990 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 03:24:14.137362  371990 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 03:24:14.167788  371990 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 03:24:14.256296  371990 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 03:24:14.335846  371990 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 03:24:14.409462  371990 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 03:24:14.592839  371990 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 03:24:14.593412  371990 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 03:24:14.597164  371990 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 03:24:14.598823  371990 out.go:252]   - Booting up control plane ...
	I1219 03:24:14.598951  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 03:24:14.599066  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 03:24:14.599695  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 03:24:14.613628  371990 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 03:24:14.613794  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 03:24:14.621414  371990 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 03:24:14.621682  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 03:24:14.621767  371990 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 03:24:14.720948  371990 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 03:24:14.721103  371990 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 03:24:15.222675  371990 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.8355ms
	I1219 03:24:15.227351  371990 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 03:24:15.227489  371990 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1219 03:24:15.227609  371990 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 03:24:15.227757  371990 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 03:24:16.232434  371990 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004794877s
	I1219 03:24:16.822339  371990 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.594795775s
	I1219 03:24:18.729241  371990 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501609989s
	I1219 03:24:18.747830  371990 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 03:24:18.757789  371990 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 03:24:18.768843  371990 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 03:24:18.769101  371990 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-837172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 03:24:18.777248  371990 kubeadm.go:319] [bootstrap-token] Using token: tjh3gu.t27j0f9f7y1maup8
	I1219 03:24:18.778596  371990 out.go:252]   - Configuring RBAC rules ...
	I1219 03:24:18.778756  371990 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 03:24:18.782127  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 03:24:18.788723  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 03:24:18.791752  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 03:24:18.794369  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 03:24:18.796980  371990 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 03:24:19.135416  371990 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 03:24:19.551422  371990 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 03:24:20.135668  371990 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 03:24:20.136573  371990 kubeadm.go:319] 
	I1219 03:24:20.136667  371990 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 03:24:20.136677  371990 kubeadm.go:319] 
	I1219 03:24:20.136815  371990 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 03:24:20.136852  371990 kubeadm.go:319] 
	I1219 03:24:20.136883  371990 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 03:24:20.136970  371990 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 03:24:20.137020  371990 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 03:24:20.137026  371990 kubeadm.go:319] 
	I1219 03:24:20.137089  371990 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 03:24:20.137101  371990 kubeadm.go:319] 
	I1219 03:24:20.137171  371990 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 03:24:20.137179  371990 kubeadm.go:319] 
	I1219 03:24:20.137247  371990 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 03:24:20.137362  371990 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 03:24:20.137462  371990 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 03:24:20.137475  371990 kubeadm.go:319] 
	I1219 03:24:20.137594  371990 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 03:24:20.137725  371990 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 03:24:20.137741  371990 kubeadm.go:319] 
	I1219 03:24:20.137841  371990 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.137977  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 03:24:20.138014  371990 kubeadm.go:319] 	--control-plane 
	I1219 03:24:20.138022  371990 kubeadm.go:319] 
	I1219 03:24:20.138116  371990 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 03:24:20.138124  371990 kubeadm.go:319] 
	I1219 03:24:20.138229  371990 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.138367  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
	I1219 03:24:20.141307  371990 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1219 03:24:20.141417  371990 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
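	(The join command printed above embeds a SHA-256 hash of the cluster CA's public key, which a joining node uses to verify it is talking to the right control plane. A sketch of recomputing it, using the certificate directory this run configured, /var/lib/minikube/certs:)
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	  # expected to print the e8b10e60... value shown in --discovery-token-ca-cert-hash above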
	I1219 03:24:20.141469  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:20.141490  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:20.143537  371990 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 03:24:20.144502  371990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:24:20.148822  371990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1219 03:24:20.148843  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:24:20.161612  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1219 03:24:20.379173  371990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:24:20.379262  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.379275  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-837172 minikube.k8s.io/updated_at=2025_12_19T03_24_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=newest-cni-837172 minikube.k8s.io/primary=true
	I1219 03:24:20.388746  371990 ops.go:34] apiserver oom_adj: -16
	I1219 03:24:20.454762  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.955824  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.454834  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.954831  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.455563  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.955820  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.454808  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.955426  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.454807  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.521140  371990 kubeadm.go:1114] duration metric: took 4.141930442s to wait for elevateKubeSystemPrivileges
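	(The "get sa default" loop above waits for the kube-system default service account so the minikube-rbac binding created just before it can grant that account cluster-admin. A sketch of confirming the grant took effect, run with the cluster's kubeconfig:)
	  kubectl get clusterrolebinding minikube-rbac -o wide
	  kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default   # expected: yes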
	I1219 03:24:24.521185  371990 kubeadm.go:403] duration metric: took 11.630460792s to StartCluster
	I1219 03:24:24.521209  371990 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.521280  371990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:24.522690  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.522969  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:24:24.522985  371990 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:24.523053  371990 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:24.523152  371990 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-837172"
	I1219 03:24:24.523166  371990 addons.go:70] Setting default-storageclass=true in profile "newest-cni-837172"
	I1219 03:24:24.523191  371990 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-837172"
	I1219 03:24:24.523195  371990 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-837172"
	I1219 03:24:24.523231  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.523251  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:24.523588  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.523773  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.524387  371990 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:24.525579  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:24.547572  371990 addons.go:239] Setting addon default-storageclass=true in "newest-cni-837172"
	I1219 03:24:24.547634  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.547832  371990 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:24.548129  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.552104  371990 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.552127  371990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:24.552183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.578893  371990 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.579252  371990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:24.579323  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.583084  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.603726  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.615978  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:24:24.668369  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:24.704139  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.719590  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.803320  371990 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
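	(The sed pipeline a few lines up splices a hosts block into the CoreDNS Corefile so host.minikube.internal resolves to the host gateway. A sketch of reading the injected fragment back; the commented lines show the expected shape:)
	  kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	  #        hosts {
	  #           192.168.76.1 host.minikube.internal
	  #           fallthrough
	  #        }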
	I1219 03:24:24.805437  371990 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:24.805497  371990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:25.029229  371990 api_server.go:72] duration metric: took 506.215716ms to wait for apiserver process to appear ...
	I1219 03:24:25.029261  371990 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:25.029282  371990 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:25.034829  371990 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:24:25.035777  371990 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:25.035813  371990 api_server.go:131] duration metric: took 6.544499ms to wait for apiserver health ...
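	(The healthz poll above is a plain unauthenticated GET against the API server; under default RBAC, /healthz, /livez and /readyz allow anonymous access. A sketch of reproducing it from the node, or anywhere 192.168.76.2 is routable, using the CA copied earlier in this log:)
	  curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.76.2:8443/healthz   # prints: ok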
	I1219 03:24:25.035828  371990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:25.038607  371990 system_pods.go:59] 8 kube-system pods found
	I1219 03:24:25.038639  371990 system_pods.go:61] "coredns-7d764666f9-ckc9j" [5bc3e758-2623-4eae-87fe-a58b932c9e87] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038651  371990 system_pods.go:61] "etcd-newest-cni-837172" [59f28fae-3605-487b-a1b8-c3851c47abac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:25.038659  371990 system_pods.go:61] "kindnet-846n4" [b45c7fbd-085c-4972-b312-0973aab68ddc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:25.038670  371990 system_pods.go:61] "kube-apiserver-newest-cni-837172" [8d92900e-716d-42ad-9d88-1ca6d0ddf5c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:25.038678  371990 system_pods.go:61] "kube-controller-manager-newest-cni-837172" [46b3ad5a-64d1-4e1f-8bdf-ce613dcd6348] Running
	I1219 03:24:25.038684  371990 system_pods.go:61] "kube-proxy-6wg2n" [356cd689-df37-49ac-a3f2-1931978ccf64] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:24:25.038690  371990 system_pods.go:61] "kube-scheduler-newest-cni-837172" [da065d09-cc65-42e7-8e0d-9f9709cafaf9] Running
	I1219 03:24:25.038695  371990 system_pods.go:61] "storage-provisioner" [ba402c27-5828-489f-a656-bc0ef2e8f05e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038713  371990 system_pods.go:74] duration metric: took 2.880877ms to wait for pod list to return data ...
	I1219 03:24:25.038720  371990 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:25.038969  371990 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:24:25.040226  371990 addons.go:546] duration metric: took 517.179033ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:24:25.040990  371990 default_sa.go:45] found service account: "default"
	I1219 03:24:25.041006  371990 default_sa.go:55] duration metric: took 2.27792ms for default service account to be created ...
	I1219 03:24:25.041015  371990 kubeadm.go:587] duration metric: took 518.007856ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:25.041030  371990 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:25.043438  371990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:25.043465  371990 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:25.043494  371990 node_conditions.go:105] duration metric: took 2.45952ms to run NodePressure ...
	I1219 03:24:25.043503  371990 start.go:242] waiting for startup goroutines ...
	I1219 03:24:25.308179  371990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-837172" context rescaled to 1 replicas
	I1219 03:24:25.308227  371990 start.go:247] waiting for cluster config update ...
	I1219 03:24:25.308241  371990 start.go:256] writing updated cluster config ...
	I1219 03:24:25.308502  371990 ssh_runner.go:195] Run: rm -f paused
	I1219 03:24:25.358553  371990 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:24:25.360429  371990 out.go:179] * Done! kubectl is now configured to use "newest-cni-837172" cluster and "default" namespace by default
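	(At this point the kubeconfig at /home/jenkins/minikube-integration/22230-4987/kubeconfig already selects the new context; a quick sanity check from the test host might look like:)
	  kubectl config current-context          # newest-cni-837172
	  kubectl get nodes -o wide
	  kubectl -n kube-system get pods -o wide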
	
	
	==> CRI-O <==
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.132597688Z" level=info msg="Ran pod sandbox 59c78b10eb5169e9137ea0da82b92feda29abd8df4cd0a19b5ac0483a3010bf4 with infra container: kube-system/kindnet-846n4/POD" id=120766c1-45cf-4b4a-bec8-20d72dee91cd name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.133260484Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=116bbd99-f592-45f6-a835-1f72cefcfedc name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.133430674Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=0bf5f2c1-d3cd-479c-801c-217e022e9c9c name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.133564472Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=0bf5f2c1-d3cd-479c-801c-217e022e9c9c name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.133608068Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=0bf5f2c1-d3cd-479c-801c-217e022e9c9c name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.134198795Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=6c16425c-fa83-41da-9052-fa4247aea7e7 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.134548037Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=73b36263-d64a-4ba0-998d-45718b0cc225 name=/runtime.v1.ImageService/PullImage
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.137860191Z" level=info msg="Creating container: kube-system/kube-proxy-6wg2n/kube-proxy" id=f9d6c06a-844e-414c-bd89-6f2f933771d4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.137984803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.14225882Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.14319416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.14364582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.182990917Z" level=info msg="Created container 5ccd489be0d348bb1ee441a3fa51da1e0aea901ea988f1d40475f9fdc20cada2: kube-system/kube-proxy-6wg2n/kube-proxy" id=f9d6c06a-844e-414c-bd89-6f2f933771d4 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.183645306Z" level=info msg="Starting container: 5ccd489be0d348bb1ee441a3fa51da1e0aea901ea988f1d40475f9fdc20cada2" id=cc360b43-3b65-43c7-8049-2e5185b213bb name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:24:25 newest-cni-837172 crio[780]: time="2025-12-19T03:24:25.186689646Z" level=info msg="Started container" PID=1593 containerID=5ccd489be0d348bb1ee441a3fa51da1e0aea901ea988f1d40475f9fdc20cada2 description=kube-system/kube-proxy-6wg2n/kube-proxy id=cc360b43-3b65-43c7-8049-2e5185b213bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=34c0aaefc4b82826ead4aff528a203ac680a9ee75361b65faff490b9c82625f4
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.369456402Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27" id=73b36263-d64a-4ba0-998d-45718b0cc225 name=/runtime.v1.ImageService/PullImage
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.370121602Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=39cfb543-d9dc-4704-9a6a-516bd169be70 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.371991904Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=6389d76e-85c0-4cda-bc97-39da29f3def3 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.375366254Z" level=info msg="Creating container: kube-system/kindnet-846n4/kindnet-cni" id=47037c7f-9555-409e-9e6e-78ea64020c7d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.375456786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.379457235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.380037457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.415759336Z" level=info msg="Created container 92568bdd78bc8965f8f7cb43d5b779ef96691de5a20506a4e23ea03f32783f6a: kube-system/kindnet-846n4/kindnet-cni" id=47037c7f-9555-409e-9e6e-78ea64020c7d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.416325053Z" level=info msg="Starting container: 92568bdd78bc8965f8f7cb43d5b779ef96691de5a20506a4e23ea03f32783f6a" id=918c81a6-444e-49b2-97c0-3192ba9ccc77 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:24:26 newest-cni-837172 crio[780]: time="2025-12-19T03:24:26.418105502Z" level=info msg="Started container" PID=1848 containerID=92568bdd78bc8965f8f7cb43d5b779ef96691de5a20506a4e23ea03f32783f6a description=kube-system/kindnet-846n4/kindnet-cni id=918c81a6-444e-49b2-97c0-3192ba9ccc77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=59c78b10eb5169e9137ea0da82b92feda29abd8df4cd0a19b5ac0483a3010bf4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	92568bdd78bc8       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   Less than a second ago   Running             kindnet-cni               0                   59c78b10eb516       kindnet-846n4                               kube-system
	5ccd489be0d34       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                     1 second ago             Running             kube-proxy                0                   34c0aaefc4b82       kube-proxy-6wg2n                            kube-system
	ce7ee90a49984       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     11 seconds ago           Running             etcd                      0                   ecbb653d0bb58       etcd-newest-cni-837172                      kube-system
	c550ecd840ccf       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                     11 seconds ago           Running             kube-apiserver            0                   858d1d9fc4a44       kube-apiserver-newest-cni-837172            kube-system
	84a10d3369f5b       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                     11 seconds ago           Running             kube-scheduler            0                   80dd52ded779f       kube-scheduler-newest-cni-837172            kube-system
	2b3294419160c       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                     11 seconds ago           Running             kube-controller-manager   0                   47e4e3be55ddc       kube-controller-manager-newest-cni-837172   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-837172
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-837172
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=newest-cni-837172
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_24_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:24:16 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-837172
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:24:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:19 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:19 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:19 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 19 Dec 2025 03:24:19 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-837172
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                89c49ec5-bdd2-4caa-8f8e-fdb6f1a61d8d
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-837172                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-846n4                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-837172             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-837172    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-6wg2n                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-837172             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-837172 event: Registered Node newest-cni-837172 in Controller
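	  (The NotReady condition and not-ready taints above clear once kindnet writes a CNI config into /etc/cni/net.d on the node; a sketch of watching for that, using this run's profile name:)
	    minikube -p newest-cni-837172 ssh -- sudo ls /etc/cni/net.d/
	    kubectl describe node newest-cni-837172 | grep -E 'Taints|Ready'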
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [ce7ee90a49984792109df5e106d9972ef7b58e8c1f0f2ffff01a0d09176d77b0] <==
	{"level":"info","ts":"2025-12-19T03:24:15.630623Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-19T03:24:15.821747Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-19T03:24:15.821813Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-19T03:24:15.821867Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-12-19T03:24:15.821882Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:24:15.821901Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:15.822642Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:15.822676Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:24:15.822714Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:15.822722Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:15.823489Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-837172 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:24:15.823528Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:24:15.823610Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:24:15.823651Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:24:15.823888Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:24:15.823943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:24:15.824293Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:24:15.824396Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:24:15.824921Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-19T03:24:15.825018Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-19T03:24:15.825142Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:24:15.825180Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:24:15.825208Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-19T03:24:15.828145Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-12-19T03:24:15.828195Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 03:24:26 up  1:06,  0 user,  load average: 1.82, 0.84, 1.22
	Linux newest-cni-837172 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [92568bdd78bc8965f8f7cb43d5b779ef96691de5a20506a4e23ea03f32783f6a] <==
	I1219 03:24:26.580167       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:24:26.674817       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1219 03:24:26.675002       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:24:26.675033       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:24:26.675071       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:24:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:24:26.876449       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:24:26.876523       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:24:26.876879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:24:26.877458       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [c550ecd840ccffa519013fecf6c316c2f065037c75bf30c8abdf90f169d289ef] <==
	I1219 03:24:16.869669       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:16.869692       1 policy_source.go:248] refreshing policies
	E1219 03:24:16.920505       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1219 03:24:16.968390       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:24:16.972407       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1219 03:24:16.972443       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:24:16.985393       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:24:17.063146       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1219 03:24:17.771196       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1219 03:24:17.775364       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1219 03:24:17.775381       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1219 03:24:18.218560       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:24:18.255790       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:24:18.375064       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1219 03:24:18.380806       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1219 03:24:18.381802       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:24:18.385508       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:24:18.793042       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1219 03:24:19.541364       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:24:19.550519       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:24:19.558509       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:24:24.395929       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:24:24.447522       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:24:24.450943       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:24:24.798112       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [2b3294419160c1bcfc15cdf95857bad9696b19b0e7bbe642589c92bb4bac3463] <==
	I1219 03:24:23.604805       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605101       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605127       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605235       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.603608       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605106       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605420       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605497       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605549       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605660       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605673       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605662       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.605886       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.606018       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.603845       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.607491       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.607937       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:24:23.608201       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.608249       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.613177       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.615459       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-837172" podCIDRs=["10.42.0.0/24"]
	I1219 03:24:23.704347       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:23.704365       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:24:23.704372       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:24:23.708652       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5ccd489be0d348bb1ee441a3fa51da1e0aea901ea988f1d40475f9fdc20cada2] <==
	I1219 03:24:25.222826       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:24:25.279375       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:24:25.380340       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:25.380402       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1219 03:24:25.380527       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:24:25.411388       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:24:25.411555       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:24:25.417782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:24:25.418310       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:24:25.418331       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:24:25.419984       1 config.go:200] "Starting service config controller"
	I1219 03:24:25.420010       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:24:25.420151       1 config.go:309] "Starting node config controller"
	I1219 03:24:25.420167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:24:25.420176       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:24:25.420198       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:24:25.420205       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:24:25.420232       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:24:25.420261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:24:25.520276       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:24:25.520284       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:24:25.520334       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [84a10d3369f5ba2e5aeb0a12f5f2f6f7e8b86c2e3c116820f29fc6028911cd72] <==
	E1219 03:24:16.819004       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 03:24:16.819246       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 03:24:16.819961       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 03:24:16.819996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 03:24:16.821462       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:24:16.821492       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:24:16.821505       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 03:24:16.821541       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:24:16.821579       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 03:24:16.821586       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 03:24:16.821635       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:24:16.821754       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 03:24:16.821813       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 03:24:16.821839       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 03:24:16.821923       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 03:24:17.646196       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 03:24:17.646196       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 03:24:17.661371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 03:24:17.661535       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 03:24:17.703335       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:24:17.711957       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 03:24:17.774611       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:24:17.817229       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:24:18.130001       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1219 03:24:20.914784       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:24:20 newest-cni-837172 kubelet[1312]: E1219 03:24:20.406550    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-837172" containerName="kube-apiserver"
	Dec 19 03:24:20 newest-cni-837172 kubelet[1312]: E1219 03:24:20.406953    1312 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-837172\" already exists" pod="kube-system/etcd-newest-cni-837172"
	Dec 19 03:24:20 newest-cni-837172 kubelet[1312]: E1219 03:24:20.407019    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-837172" containerName="etcd"
	Dec 19 03:24:21 newest-cni-837172 kubelet[1312]: I1219 03:24:21.020674    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-837172" podStartSLOduration=3.020629387 podStartE2EDuration="3.020629387s" podCreationTimestamp="2025-12-19 03:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:24:21.020522583 +0000 UTC m=+1.727338358" watchObservedRunningTime="2025-12-19 03:24:21.020629387 +0000 UTC m=+1.727445163"
	Dec 19 03:24:21 newest-cni-837172 kubelet[1312]: I1219 03:24:21.029404    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-837172" podStartSLOduration=2.029389958 podStartE2EDuration="2.029389958s" podCreationTimestamp="2025-12-19 03:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:24:21.029276242 +0000 UTC m=+1.736092020" watchObservedRunningTime="2025-12-19 03:24:21.029389958 +0000 UTC m=+1.736205733"
	Dec 19 03:24:21 newest-cni-837172 kubelet[1312]: I1219 03:24:21.036809    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-837172" podStartSLOduration=3.036778144 podStartE2EDuration="3.036778144s" podCreationTimestamp="2025-12-19 03:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:24:21.036440469 +0000 UTC m=+1.743256251" watchObservedRunningTime="2025-12-19 03:24:21.036778144 +0000 UTC m=+1.743593919"
	Dec 19 03:24:21 newest-cni-837172 kubelet[1312]: I1219 03:24:21.044524    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-837172" podStartSLOduration=2.044511491 podStartE2EDuration="2.044511491s" podCreationTimestamp="2025-12-19 03:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:24:21.044486447 +0000 UTC m=+1.751302222" watchObservedRunningTime="2025-12-19 03:24:21.044511491 +0000 UTC m=+1.751327263"
	Dec 19 03:24:21 newest-cni-837172 kubelet[1312]: E1219 03:24:21.397406    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-837172" containerName="kube-apiserver"
	Dec 19 03:24:21 newest-cni-837172 kubelet[1312]: E1219 03:24:21.397503    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-837172" containerName="kube-controller-manager"
	Dec 19 03:24:21 newest-cni-837172 kubelet[1312]: E1219 03:24:21.397581    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-837172" containerName="etcd"
	Dec 19 03:24:21 newest-cni-837172 kubelet[1312]: E1219 03:24:21.397791    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-837172" containerName="kube-scheduler"
	Dec 19 03:24:22 newest-cni-837172 kubelet[1312]: E1219 03:24:22.398897    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-837172" containerName="etcd"
	Dec 19 03:24:22 newest-cni-837172 kubelet[1312]: E1219 03:24:22.399016    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-837172" containerName="kube-scheduler"
	Dec 19 03:24:22 newest-cni-837172 kubelet[1312]: E1219 03:24:22.916483    1312 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-837172" containerName="kube-apiserver"
	Dec 19 03:24:23 newest-cni-837172 kubelet[1312]: I1219 03:24:23.692611    1312 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 19 03:24:23 newest-cni-837172 kubelet[1312]: I1219 03:24:23.693330    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 19 03:24:24 newest-cni-837172 kubelet[1312]: I1219 03:24:24.907225    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhn97\" (UniqueName: \"kubernetes.io/projected/356cd689-df37-49ac-a3f2-1931978ccf64-kube-api-access-jhn97\") pod \"kube-proxy-6wg2n\" (UID: \"356cd689-df37-49ac-a3f2-1931978ccf64\") " pod="kube-system/kube-proxy-6wg2n"
	Dec 19 03:24:24 newest-cni-837172 kubelet[1312]: I1219 03:24:24.907282    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b45c7fbd-085c-4972-b312-0973aab68ddc-lib-modules\") pod \"kindnet-846n4\" (UID: \"b45c7fbd-085c-4972-b312-0973aab68ddc\") " pod="kube-system/kindnet-846n4"
	Dec 19 03:24:24 newest-cni-837172 kubelet[1312]: I1219 03:24:24.907312    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2gw\" (UniqueName: \"kubernetes.io/projected/b45c7fbd-085c-4972-b312-0973aab68ddc-kube-api-access-mp2gw\") pod \"kindnet-846n4\" (UID: \"b45c7fbd-085c-4972-b312-0973aab68ddc\") " pod="kube-system/kindnet-846n4"
	Dec 19 03:24:24 newest-cni-837172 kubelet[1312]: I1219 03:24:24.907339    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/356cd689-df37-49ac-a3f2-1931978ccf64-xtables-lock\") pod \"kube-proxy-6wg2n\" (UID: \"356cd689-df37-49ac-a3f2-1931978ccf64\") " pod="kube-system/kube-proxy-6wg2n"
	Dec 19 03:24:24 newest-cni-837172 kubelet[1312]: I1219 03:24:24.907429    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b45c7fbd-085c-4972-b312-0973aab68ddc-cni-cfg\") pod \"kindnet-846n4\" (UID: \"b45c7fbd-085c-4972-b312-0973aab68ddc\") " pod="kube-system/kindnet-846n4"
	Dec 19 03:24:24 newest-cni-837172 kubelet[1312]: I1219 03:24:24.907488    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/356cd689-df37-49ac-a3f2-1931978ccf64-kube-proxy\") pod \"kube-proxy-6wg2n\" (UID: \"356cd689-df37-49ac-a3f2-1931978ccf64\") " pod="kube-system/kube-proxy-6wg2n"
	Dec 19 03:24:24 newest-cni-837172 kubelet[1312]: I1219 03:24:24.907531    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b45c7fbd-085c-4972-b312-0973aab68ddc-xtables-lock\") pod \"kindnet-846n4\" (UID: \"b45c7fbd-085c-4972-b312-0973aab68ddc\") " pod="kube-system/kindnet-846n4"
	Dec 19 03:24:24 newest-cni-837172 kubelet[1312]: I1219 03:24:24.907568    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/356cd689-df37-49ac-a3f2-1931978ccf64-lib-modules\") pod \"kube-proxy-6wg2n\" (UID: \"356cd689-df37-49ac-a3f2-1931978ccf64\") " pod="kube-system/kube-proxy-6wg2n"
	Dec 19 03:24:25 newest-cni-837172 kubelet[1312]: I1219 03:24:25.425080    1312 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-6wg2n" podStartSLOduration=1.425062209 podStartE2EDuration="1.425062209s" podCreationTimestamp="2025-12-19 03:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 03:24:25.424640235 +0000 UTC m=+6.131456022" watchObservedRunningTime="2025-12-19 03:24:25.425062209 +0000 UTC m=+6.131877990"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-837172 -n newest-cni-837172
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-837172 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-ckc9j storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-837172 describe pod coredns-7d764666f9-ckc9j storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-837172 describe pod coredns-7d764666f9-ckc9j storage-provisioner: exit status 1 (56.291582ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-ckc9j" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-837172 describe pod coredns-7d764666f9-ckc9j storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)
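Note on the post-mortem steps above: helpers_test.go first asks kubectl for every pod whose phase is not Running (the --field-selector=status.phase!=Running query), which returns coredns-7d764666f9-ckc9j and storage-provisioner, and the follow-up describe then reports NotFound, most likely because it is issued against the default namespace while both pods live in kube-system. The sketch below reproduces the same field-selector query with client-go; it is an illustrative assumption rather than code from helpers_test.go, and the kubeconfig path is a placeholder (the test itself goes through kubectl with --context newest-cni-837172).

	// list_not_running.go: illustrative sketch (assumption, not helpers_test.go code).
	// It issues the same query as "kubectl get po -A --field-selector=status.phase!=Running".
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path and context; the test uses --context newest-cni-837172.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			&clientcmd.ClientConfigLoadingRules{ExplicitPath: "/path/to/kubeconfig"},
			&clientcmd.ConfigOverrides{CurrentContext: "newest-cni-837172"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The field selector is evaluated by the API server, exactly as with kubectl.
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// Print namespace/name and phase, e.g. kube-system/coredns-... phase=Pending.
			fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}

Because status.phase filtering is done server-side, the result is the same whether the query goes through kubectl or client-go.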

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-805185 --alsologtostderr -v=1
E1219 03:24:38.456245    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-805185 --alsologtostderr -v=1: exit status 80 (1.775904563s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-805185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:24:37.235216  377353 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:37.235451  377353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:37.235459  377353 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:37.235463  377353 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:37.235684  377353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:37.235930  377353 out.go:368] Setting JSON to false
	I1219 03:24:37.235948  377353 mustload.go:66] Loading cluster: embed-certs-805185
	I1219 03:24:37.236284  377353 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:37.236667  377353 cli_runner.go:164] Run: docker container inspect embed-certs-805185 --format={{.State.Status}}
	I1219 03:24:37.255294  377353 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:24:37.255651  377353 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:37.309746  377353 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:24:37.300012851 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:37.310427  377353 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-805185 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1219 03:24:37.312321  377353 out.go:179] * Pausing node embed-certs-805185 ... 
	I1219 03:24:37.313506  377353 host.go:66] Checking if "embed-certs-805185" exists ...
	I1219 03:24:37.313806  377353 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:37.313855  377353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-805185
	I1219 03:24:37.331840  377353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/embed-certs-805185/id_rsa Username:docker}
	I1219 03:24:37.432761  377353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:24:37.445714  377353 pause.go:52] kubelet running: true
	I1219 03:24:37.445790  377353 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:24:37.632636  377353 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:24:37.632776  377353 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:24:37.700542  377353 cri.go:92] found id: "3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904"
	I1219 03:24:37.700573  377353 cri.go:92] found id: "37fd60f84cab5a40d06b06eda266df17eadd8d0a9ee56f7b235782087ec0083a"
	I1219 03:24:37.700579  377353 cri.go:92] found id: "3e6a9f16432bb2d0f57c9e657b776eaae753f9a9bc474bcd825b022f2cf4726b"
	I1219 03:24:37.700584  377353 cri.go:92] found id: "3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2"
	I1219 03:24:37.700589  377353 cri.go:92] found id: "9734264bc03165e973381a11181db3d0d85532eb608a1d648d545affcc0f5657"
	I1219 03:24:37.700594  377353 cri.go:92] found id: "dca8f84f406b7acd8227404694ece4fd29d232591939f26e4325c52e7c00de60"
	I1219 03:24:37.700599  377353 cri.go:92] found id: "c0e9c22a2523807e95fb727795c040c95c5bd029feb66a6a92f7087e4503774e"
	I1219 03:24:37.700603  377353 cri.go:92] found id: "e4f794af7924e48700f3eb1f53c1070c15bc99d17539d5f097c1a7c62dded81f"
	I1219 03:24:37.700607  377353 cri.go:92] found id: "fa9a88171fdc75e01df96259a9096dab5e5ab76217553f36b6a9922f9e0f06fe"
	I1219 03:24:37.700614  377353 cri.go:92] found id: "20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2"
	I1219 03:24:37.700616  377353 cri.go:92] found id: "d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885"
	I1219 03:24:37.700619  377353 cri.go:92] found id: "a0449cd05686367a0a816405c686858df4a264fbcacf43407705baff34ccbc5a"
	I1219 03:24:37.700622  377353 cri.go:92] found id: "95cc887c80866d0ea33ef79f7654625e51e2590ee08a32fae89a8d46347f529a"
	I1219 03:24:37.700624  377353 cri.go:92] found id: "310b39bacccabe01a7800d05d30675f93096703212a17f66095da8c1865d22d2"
	I1219 03:24:37.700627  377353 cri.go:92] found id: "5b4f7811505964d9e14b039acff4c61a760a6112e63bfff6242995499ee3b049"
	I1219 03:24:37.700635  377353 cri.go:92] found id: ""
	I1219 03:24:37.700679  377353 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:24:37.713000  377353 retry.go:31] will retry after 343.430798ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:37Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:24:38.056657  377353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:24:38.070426  377353 pause.go:52] kubelet running: false
	I1219 03:24:38.070490  377353 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:24:38.228260  377353 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:24:38.228336  377353 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:24:38.298021  377353 cri.go:92] found id: "3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904"
	I1219 03:24:38.298044  377353 cri.go:92] found id: "37fd60f84cab5a40d06b06eda266df17eadd8d0a9ee56f7b235782087ec0083a"
	I1219 03:24:38.298048  377353 cri.go:92] found id: "3e6a9f16432bb2d0f57c9e657b776eaae753f9a9bc474bcd825b022f2cf4726b"
	I1219 03:24:38.298051  377353 cri.go:92] found id: "3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2"
	I1219 03:24:38.298054  377353 cri.go:92] found id: "9734264bc03165e973381a11181db3d0d85532eb608a1d648d545affcc0f5657"
	I1219 03:24:38.298058  377353 cri.go:92] found id: "dca8f84f406b7acd8227404694ece4fd29d232591939f26e4325c52e7c00de60"
	I1219 03:24:38.298060  377353 cri.go:92] found id: "c0e9c22a2523807e95fb727795c040c95c5bd029feb66a6a92f7087e4503774e"
	I1219 03:24:38.298063  377353 cri.go:92] found id: "e4f794af7924e48700f3eb1f53c1070c15bc99d17539d5f097c1a7c62dded81f"
	I1219 03:24:38.298065  377353 cri.go:92] found id: "fa9a88171fdc75e01df96259a9096dab5e5ab76217553f36b6a9922f9e0f06fe"
	I1219 03:24:38.298071  377353 cri.go:92] found id: "20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2"
	I1219 03:24:38.298074  377353 cri.go:92] found id: "d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885"
	I1219 03:24:38.298077  377353 cri.go:92] found id: "a0449cd05686367a0a816405c686858df4a264fbcacf43407705baff34ccbc5a"
	I1219 03:24:38.298079  377353 cri.go:92] found id: "95cc887c80866d0ea33ef79f7654625e51e2590ee08a32fae89a8d46347f529a"
	I1219 03:24:38.298082  377353 cri.go:92] found id: "310b39bacccabe01a7800d05d30675f93096703212a17f66095da8c1865d22d2"
	I1219 03:24:38.298084  377353 cri.go:92] found id: "5b4f7811505964d9e14b039acff4c61a760a6112e63bfff6242995499ee3b049"
	I1219 03:24:38.298089  377353 cri.go:92] found id: ""
	I1219 03:24:38.298127  377353 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:24:38.310841  377353 retry.go:31] will retry after 378.301122ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:38Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:24:38.689408  377353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:24:38.703196  377353 pause.go:52] kubelet running: false
	I1219 03:24:38.703264  377353 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:24:38.863482  377353 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:24:38.863565  377353 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:24:38.930415  377353 cri.go:92] found id: "3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904"
	I1219 03:24:38.930439  377353 cri.go:92] found id: "37fd60f84cab5a40d06b06eda266df17eadd8d0a9ee56f7b235782087ec0083a"
	I1219 03:24:38.930443  377353 cri.go:92] found id: "3e6a9f16432bb2d0f57c9e657b776eaae753f9a9bc474bcd825b022f2cf4726b"
	I1219 03:24:38.930446  377353 cri.go:92] found id: "3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2"
	I1219 03:24:38.930449  377353 cri.go:92] found id: "9734264bc03165e973381a11181db3d0d85532eb608a1d648d545affcc0f5657"
	I1219 03:24:38.930453  377353 cri.go:92] found id: "dca8f84f406b7acd8227404694ece4fd29d232591939f26e4325c52e7c00de60"
	I1219 03:24:38.930455  377353 cri.go:92] found id: "c0e9c22a2523807e95fb727795c040c95c5bd029feb66a6a92f7087e4503774e"
	I1219 03:24:38.930458  377353 cri.go:92] found id: "e4f794af7924e48700f3eb1f53c1070c15bc99d17539d5f097c1a7c62dded81f"
	I1219 03:24:38.930460  377353 cri.go:92] found id: "fa9a88171fdc75e01df96259a9096dab5e5ab76217553f36b6a9922f9e0f06fe"
	I1219 03:24:38.930465  377353 cri.go:92] found id: "20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2"
	I1219 03:24:38.930468  377353 cri.go:92] found id: "d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885"
	I1219 03:24:38.930471  377353 cri.go:92] found id: "a0449cd05686367a0a816405c686858df4a264fbcacf43407705baff34ccbc5a"
	I1219 03:24:38.930473  377353 cri.go:92] found id: "95cc887c80866d0ea33ef79f7654625e51e2590ee08a32fae89a8d46347f529a"
	I1219 03:24:38.930476  377353 cri.go:92] found id: "310b39bacccabe01a7800d05d30675f93096703212a17f66095da8c1865d22d2"
	I1219 03:24:38.930480  377353 cri.go:92] found id: "5b4f7811505964d9e14b039acff4c61a760a6112e63bfff6242995499ee3b049"
	I1219 03:24:38.930485  377353 cri.go:92] found id: ""
	I1219 03:24:38.930532  377353 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:24:38.944578  377353 out.go:203] 
	W1219 03:24:38.945925  377353 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 03:24:38.945944  377353 out.go:285] * 
	* 
	W1219 03:24:38.949949  377353 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 03:24:38.951247  377353 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-805185 --alsologtostderr -v=1 failed: exit status 80
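For context on the exit status 80: the pause path stops the kubelet (systemctl disable --now), lists containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces through crictl, and then runs sudo runc list -f json. On this node every attempt fails with "open /run/runc: no such file or directory", the retries logged by retry.go:31 are used up, and minikube aborts with GUEST_PAUSE. The crictl listing itself succeeds (container IDs are found each time), so the failure is isolated to runc's state directory being absent. The Go sketch below imitates that retry-then-fail shape; it is a simplified illustration under stated assumptions, not minikube's retry.go or pause implementation, and the attempt count and delay are placeholders.

	// pause_runc_retry.go: simplified illustration (assumption, not minikube's retry.go).
	// It runs the same command the pause path retried above and gives up after a few attempts.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runcList shells out the way the log shows and wraps any failure with the combined output.
	func runcList() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list: %w: %s", err, out)
		}
		return nil
	}

	func main() {
		const attempts = 3              // placeholder; not the value minikube uses
		delay := 350 * time.Millisecond // placeholder delay between tries

		var err error
		for i := 1; i <= attempts; i++ {
			if err = runcList(); err == nil {
				fmt.Println("runc list succeeded")
				return
			}
			// Mirrors the "will retry after ..." lines emitted by retry.go:31.
			fmt.Printf("attempt %d failed: %v; retrying in %s\n", i, err, delay)
			time.Sleep(delay)
		}
		// Retries exhausted; this is where the real pause surfaces GUEST_PAUSE (exit status 80).
		fmt.Printf("giving up after %d attempts: %v\n", attempts, err)
	}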
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-805185
helpers_test.go:244: (dbg) docker inspect embed-certs-805185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	        "Created": "2025-12-19T03:04:41.634228453Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:45.883197161Z",
	            "FinishedAt": "2025-12-19T03:05:44.649106592Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hosts",
	        "LogPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415-json.log",
	        "Name": "/embed-certs-805185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-805185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-805185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	                "LowerDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-805185",
	                "Source": "/var/lib/docker/volumes/embed-certs-805185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-805185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-805185",
	                "name.minikube.sigs.k8s.io": "embed-certs-805185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7457f8142accad01c6ab136b22c6fa80ee06dd20e79f2a84f99ffb94723b6308",
	            "SandboxKey": "/var/run/docker/netns/7457f8142acc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-805185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67670b4143fc2c858529db8e9ece90091b3a7a00c5465943bbbbea83d055a550",
	                    "EndpointID": "a46e3becc7625d5ecd97a1cbfefeda9844ff31ce4ce29ae0c0d5c0cbe2af09be",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d6:26:96:9c:9e:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-805185",
	                        "c2b5f77a65ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185: exit status 2 (325.073636ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25: (1.193847278s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ stop    │ -p newest-cni-837172 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ image   │ embed-certs-805185 image list --format=json                                                                                                                                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p embed-certs-805185 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:01.036023  371990 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:01.036565  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.036582  371990 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:01.036589  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.037114  371990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:01.038234  371990 out.go:368] Setting JSON to false
	I1219 03:24:01.039510  371990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3992,"bootTime":1766110649,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:01.039592  371990 start.go:143] virtualization: kvm guest
	I1219 03:24:01.041656  371990 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:01.043211  371990 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:01.043253  371990 notify.go:221] Checking for updates...
	I1219 03:24:01.045604  371990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:01.046873  371990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:01.047985  371990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:01.052214  371990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:01.053413  371990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:01.055079  371990 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055198  371990 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055324  371990 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:01.055430  371990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:01.080518  371990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:01.080672  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.143010  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.132535066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.143105  371990 docker.go:319] overlay module found
	I1219 03:24:01.144954  371990 out.go:179] * Using the docker driver based on user configuration
	I1219 03:24:01.146278  371990 start.go:309] selected driver: docker
	I1219 03:24:01.146299  371990 start.go:928] validating driver "docker" against <nil>
	I1219 03:24:01.146315  371990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:01.147198  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.207023  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.196664778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.207180  371990 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1219 03:24:01.207207  371990 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1219 03:24:01.207525  371990 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:01.209632  371990 out.go:179] * Using Docker driver with root privileges
	I1219 03:24:01.210891  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:01.210974  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:01.210985  371990 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 03:24:01.211049  371990 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.212320  371990 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:01.213422  371990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:01.214779  371990 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:01.215953  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.216006  371990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:01.216025  371990 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:01.216047  371990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:01.216120  371990 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:01.216133  371990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:01.216218  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:01.216239  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json: {Name:mkf2bb7657c731e279d378a607e1a523b320a47e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:01.237349  371990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:01.237368  371990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:01.237386  371990 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:01.237420  371990 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:01.237512  371990 start.go:364] duration metric: took 75.602µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:01.237534  371990 start.go:93] Provisioning new machine with config: &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:01.237590  371990 start.go:125] createHost starting for "" (driver="docker")
	I1219 03:24:01.239751  371990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1219 03:24:01.239974  371990 start.go:159] libmachine.API.Create for "newest-cni-837172" (driver="docker")
	I1219 03:24:01.240017  371990 client.go:173] LocalClient.Create starting
	I1219 03:24:01.240087  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem
	I1219 03:24:01.240117  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240136  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240185  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem
	I1219 03:24:01.240204  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240213  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240512  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 03:24:01.257883  371990 cli_runner.go:211] docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 03:24:01.258008  371990 network_create.go:284] running [docker network inspect newest-cni-837172] to gather additional debugging logs...
	I1219 03:24:01.258034  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172
	W1219 03:24:01.275377  371990 cli_runner.go:211] docker network inspect newest-cni-837172 returned with exit code 1
	I1219 03:24:01.275412  371990 network_create.go:287] error running [docker network inspect newest-cni-837172]: docker network inspect newest-cni-837172: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-837172 not found
	I1219 03:24:01.275429  371990 network_create.go:289] output of [docker network inspect newest-cni-837172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-837172 not found
	
	** /stderr **
	I1219 03:24:01.275535  371990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:01.294388  371990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d70e62b79a31 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:cf:22:72:cb:a0} reservation:<nil>}
	I1219 03:24:01.295272  371990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-980aea652065 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ba:dd:9c:97:fb:7d} reservation:<nil>}
	I1219 03:24:01.296258  371990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42b42f6a5044 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:1e:31:1b:21:84} reservation:<nil>}
	I1219 03:24:01.297569  371990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec48c0}
	I1219 03:24:01.297599  371990 network_create.go:124] attempt to create docker network newest-cni-837172 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1219 03:24:01.297651  371990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-837172 newest-cni-837172
	I1219 03:24:01.350655  371990 network_create.go:108] docker network newest-cni-837172 192.168.76.0/24 created
	I1219 03:24:01.350682  371990 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-837172" container
	I1219 03:24:01.350794  371990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 03:24:01.370331  371990 cli_runner.go:164] Run: docker volume create newest-cni-837172 --label name.minikube.sigs.k8s.io=newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true
	I1219 03:24:01.391519  371990 oci.go:103] Successfully created a docker volume newest-cni-837172
	I1219 03:24:01.391624  371990 cli_runner.go:164] Run: docker run --rm --name newest-cni-837172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --entrypoint /usr/bin/test -v newest-cni-837172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1219 03:24:01.840345  371990 oci.go:107] Successfully prepared a docker volume newest-cni-837172
	I1219 03:24:01.840449  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.840465  371990 kic.go:194] Starting extracting preloaded images to volume ...
	I1219 03:24:01.840529  371990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1219 03:24:05.697885  371990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.857303195s)
	I1219 03:24:05.697924  371990 kic.go:203] duration metric: took 3.857455339s to extract preloaded images to volume ...
	W1219 03:24:05.698024  371990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1219 03:24:05.698058  371990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1219 03:24:05.698100  371990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 03:24:05.757547  371990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-837172 --name newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-837172 --network newest-cni-837172 --ip 192.168.76.2 --volume newest-cni-837172:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1219 03:24:06.051568  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Running}}
	I1219 03:24:06.072261  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.093313  371990 cli_runner.go:164] Run: docker exec newest-cni-837172 stat /var/lib/dpkg/alternatives/iptables
	I1219 03:24:06.144238  371990 oci.go:144] the created container "newest-cni-837172" has a running status.
	I1219 03:24:06.144278  371990 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa...
	I1219 03:24:06.230796  371990 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 03:24:06.256299  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.273734  371990 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1219 03:24:06.273758  371990 kic_runner.go:114] Args: [docker exec --privileged newest-cni-837172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1219 03:24:06.341522  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.363532  371990 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:06.363655  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:06.390168  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:06.390536  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:06.390552  371990 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:06.391620  371990 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34054->127.0.0.1:33138: read: connection reset by peer
	I1219 03:24:09.536680  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.536733  371990 ubuntu.go:182] provisioning hostname "newest-cni-837172"
	I1219 03:24:09.536797  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.555045  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.555325  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.555340  371990 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-837172 && echo "newest-cni-837172" | sudo tee /etc/hostname
	I1219 03:24:09.709116  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.709183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.727847  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.728289  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.728322  371990 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-837172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-837172/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-837172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:09.871486  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:09.871529  371990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:24:09.871588  371990 ubuntu.go:190] setting up certificates
	I1219 03:24:09.871600  371990 provision.go:84] configureAuth start
	I1219 03:24:09.871666  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:09.890551  371990 provision.go:143] copyHostCerts
	I1219 03:24:09.890608  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:24:09.890616  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:24:09.890710  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:24:09.890819  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:24:09.890829  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:24:09.890867  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:24:09.890920  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:24:09.890933  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:24:09.890959  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:24:09.891015  371990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-837172 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-837172]
	I1219 03:24:09.923962  371990 provision.go:177] copyRemoteCerts
	I1219 03:24:09.924021  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:09.924055  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.943177  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.046012  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:10.066001  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:10.083456  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:24:10.101464  371990 provision.go:87] duration metric: took 229.847544ms to configureAuth
	I1219 03:24:10.101492  371990 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:10.101673  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:10.101801  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.120532  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:10.120821  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:10.120839  371990 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:24:10.410477  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:24:10.410502  371990 machine.go:97] duration metric: took 4.046944113s to provisionDockerMachine
	I1219 03:24:10.410513  371990 client.go:176] duration metric: took 9.170488353s to LocalClient.Create
	I1219 03:24:10.410535  371990 start.go:167] duration metric: took 9.170561433s to libmachine.API.Create "newest-cni-837172"
	I1219 03:24:10.410546  371990 start.go:293] postStartSetup for "newest-cni-837172" (driver="docker")
	I1219 03:24:10.410559  371990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:10.410613  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:10.410664  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.430222  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.533641  371990 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:10.537745  371990 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:10.537783  371990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:10.537806  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:24:10.537857  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:24:10.537934  371990 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:24:10.538030  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:10.545818  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:10.566832  371990 start.go:296] duration metric: took 156.272185ms for postStartSetup
	I1219 03:24:10.567244  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.586641  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:10.586934  371990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:10.586987  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.604894  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.703924  371990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:10.708480  371990 start.go:128] duration metric: took 9.470874061s to createHost
	I1219 03:24:10.708519  371990 start.go:83] releasing machines lock for "newest-cni-837172", held for 9.47099552s
	I1219 03:24:10.708596  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.727823  371990 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:10.727853  371990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:10.727877  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.727922  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.748155  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.748577  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.899556  371990 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:10.906157  371990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:24:10.942010  371990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:10.946776  371990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:10.946834  371990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:10.972921  371990 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:24:10.972943  371990 start.go:496] detecting cgroup driver to use...
	I1219 03:24:10.972971  371990 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:10.973032  371990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:24:10.989146  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:24:11.002203  371990 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:11.002282  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:11.018422  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:11.035554  371990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:11.119919  371990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:11.207179  371990 docker.go:234] disabling docker service ...
	I1219 03:24:11.207252  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:11.225572  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:11.237859  371990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:11.323024  371990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:11.407303  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:24:11.419524  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:11.433341  371990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:24:11.433395  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.443408  371990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:24:11.443468  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.452460  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.460889  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.469451  371990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:11.477277  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.485766  371990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.499106  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.508174  371990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:11.515313  371990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:11.522319  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:11.604796  371990 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:24:11.746317  371990 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:24:11.746376  371990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:24:11.750220  371990 start.go:564] Will wait 60s for crictl version
	I1219 03:24:11.750278  371990 ssh_runner.go:195] Run: which crictl
	I1219 03:24:11.753821  371990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:11.777608  371990 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:24:11.777714  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.804073  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.833640  371990 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:24:11.834886  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:11.852567  371990 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:11.856667  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:11.871316  371990 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:11.872497  371990 kubeadm.go:884] updating cluster {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:11.872642  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:11.872692  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.904183  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.904204  371990 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:24:11.904263  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.930999  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.931020  371990 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:11.931026  371990 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:24:11.931148  371990 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-837172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:24:11.931228  371990 ssh_runner.go:195] Run: crio config
	I1219 03:24:11.976472  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:11.976491  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:11.976503  371990 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:11.976531  371990 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-837172 NodeName:newest-cni-837172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:11.976658  371990 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-837172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:24:11.976739  371990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:11.985021  371990 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:11.985080  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:11.992859  371990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:24:12.006496  371990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:12.021643  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1219 03:24:12.034441  371990 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:12.038092  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:12.047986  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:12.128789  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:12.152988  371990 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172 for IP: 192.168.76.2
	I1219 03:24:12.153016  371990 certs.go:195] generating shared ca certs ...
	I1219 03:24:12.153035  371990 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.153175  371990 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:24:12.153220  371990 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:24:12.153233  371990 certs.go:257] generating profile certs ...
	I1219 03:24:12.153289  371990 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key
	I1219 03:24:12.153302  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt with IP's: []
	I1219 03:24:12.271406  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt ...
	I1219 03:24:12.271435  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt: {Name:mke8fed86df635a05f54420e92870363146991f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271601  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key ...
	I1219 03:24:12.271612  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key: {Name:mk39737e3f76352137132fe8060ef391a0d43bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271690  371990 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b
	I1219 03:24:12.271717  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1219 03:24:12.379475  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b ...
	I1219 03:24:12.379503  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b: {Name:mkc4d74c8f8c4deb077c8f688d203329a2c5750d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379662  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b ...
	I1219 03:24:12.379675  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b: {Name:mk1b93ad6f4ca843c3104dc76975062dde81eaef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379761  371990 certs.go:382] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt
	I1219 03:24:12.379853  371990 certs.go:386] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key
	I1219 03:24:12.379918  371990 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key
	I1219 03:24:12.379940  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt with IP's: []
	I1219 03:24:12.467338  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt ...
	I1219 03:24:12.467368  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt: {Name:mk5dc8f653da407b5f14ca799301800eac0952c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467561  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key ...
	I1219 03:24:12.467581  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key: {Name:mk4063cc1af4dbf73c9c390b468c828c35385b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467821  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:24:12.467864  371990 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:12.467875  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:24:12.467901  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:12.467925  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:12.467953  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:12.468001  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:12.468519  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:12.487159  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:12.504306  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:12.521550  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:24:12.538418  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:12.554861  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:12.572166  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:12.589324  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:24:12.606224  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:12.625269  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:24:12.642642  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:24:12.658965  371990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:12.671458  371990 ssh_runner.go:195] Run: openssl version
	I1219 03:24:12.677537  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.684496  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:12.691660  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695495  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695541  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.730806  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:24:12.738920  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:24:12.746295  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.753462  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:24:12.760758  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764356  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764415  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.800484  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:12.809192  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8536.pem /etc/ssl/certs/51391683.0
	I1219 03:24:12.816759  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.825274  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:24:12.833125  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836939  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836993  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.871891  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:12.879672  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85362.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:12.887040  371990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:12.890648  371990 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 03:24:12.890729  371990 kubeadm.go:401] StartCluster: {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:12.890825  371990 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:12.890893  371990 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:12.920058  371990 cri.go:92] found id: ""
	I1219 03:24:12.920133  371990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:12.928606  371990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:24:12.936934  371990 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1219 03:24:12.936985  371990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:24:12.945218  371990 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:24:12.945240  371990 kubeadm.go:158] found existing configuration files:
	
	I1219 03:24:12.945287  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 03:24:12.952614  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:24:12.952666  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:24:12.960262  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 03:24:12.967725  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:24:12.967831  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:24:12.975015  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.982506  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:24:12.982549  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.989686  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 03:24:12.997834  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:24:12.997888  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:24:13.005263  371990 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1219 03:24:13.041610  371990 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1219 03:24:13.041730  371990 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 03:24:13.106822  371990 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 03:24:13.106921  371990 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 03:24:13.106982  371990 kubeadm.go:319] OS: Linux
	I1219 03:24:13.107046  371990 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 03:24:13.107146  371990 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 03:24:13.107237  371990 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 03:24:13.107288  371990 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 03:24:13.107344  371990 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 03:24:13.107385  371990 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 03:24:13.107463  371990 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 03:24:13.107538  371990 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 03:24:13.164958  371990 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 03:24:13.165152  371990 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 03:24:13.165292  371990 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 03:24:13.174971  371990 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 03:24:13.178028  371990 out.go:252]   - Generating certificates and keys ...
	I1219 03:24:13.178136  371990 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 03:24:13.178232  371990 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 03:24:13.301903  371990 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 03:24:13.387971  371990 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 03:24:13.500057  371990 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 03:24:13.603458  371990 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 03:24:13.636925  371990 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 03:24:13.637122  371990 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:13.836231  371990 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 03:24:13.836371  371990 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:14.002346  371990 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 03:24:14.032095  371990 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 03:24:14.137234  371990 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 03:24:14.137362  371990 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 03:24:14.167788  371990 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 03:24:14.256296  371990 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 03:24:14.335846  371990 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 03:24:14.409462  371990 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 03:24:14.592839  371990 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 03:24:14.593412  371990 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 03:24:14.597164  371990 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 03:24:14.598823  371990 out.go:252]   - Booting up control plane ...
	I1219 03:24:14.598951  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 03:24:14.599066  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 03:24:14.599695  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 03:24:14.613628  371990 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 03:24:14.613794  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 03:24:14.621414  371990 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 03:24:14.621682  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 03:24:14.621767  371990 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 03:24:14.720948  371990 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 03:24:14.721103  371990 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 03:24:15.222675  371990 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.8355ms
	I1219 03:24:15.227351  371990 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 03:24:15.227489  371990 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1219 03:24:15.227609  371990 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 03:24:15.227757  371990 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 03:24:16.232434  371990 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004794877s
	I1219 03:24:16.822339  371990 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.594795775s
	I1219 03:24:18.729241  371990 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501609989s
	I1219 03:24:18.747830  371990 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 03:24:18.757789  371990 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 03:24:18.768843  371990 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 03:24:18.769101  371990 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-837172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 03:24:18.777248  371990 kubeadm.go:319] [bootstrap-token] Using token: tjh3gu.t27j0f9f7y1maup8
	I1219 03:24:18.778596  371990 out.go:252]   - Configuring RBAC rules ...
	I1219 03:24:18.778756  371990 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 03:24:18.782127  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 03:24:18.788723  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 03:24:18.791752  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 03:24:18.794369  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 03:24:18.796980  371990 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 03:24:19.135416  371990 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 03:24:19.551422  371990 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 03:24:20.135668  371990 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 03:24:20.136573  371990 kubeadm.go:319] 
	I1219 03:24:20.136667  371990 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 03:24:20.136677  371990 kubeadm.go:319] 
	I1219 03:24:20.136815  371990 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 03:24:20.136852  371990 kubeadm.go:319] 
	I1219 03:24:20.136883  371990 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 03:24:20.136970  371990 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 03:24:20.137020  371990 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 03:24:20.137026  371990 kubeadm.go:319] 
	I1219 03:24:20.137089  371990 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 03:24:20.137101  371990 kubeadm.go:319] 
	I1219 03:24:20.137171  371990 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 03:24:20.137179  371990 kubeadm.go:319] 
	I1219 03:24:20.137247  371990 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 03:24:20.137362  371990 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 03:24:20.137462  371990 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 03:24:20.137475  371990 kubeadm.go:319] 
	I1219 03:24:20.137594  371990 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 03:24:20.137725  371990 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 03:24:20.137741  371990 kubeadm.go:319] 
	I1219 03:24:20.137841  371990 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.137977  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 03:24:20.138014  371990 kubeadm.go:319] 	--control-plane 
	I1219 03:24:20.138022  371990 kubeadm.go:319] 
	I1219 03:24:20.138116  371990 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 03:24:20.138124  371990 kubeadm.go:319] 
	I1219 03:24:20.138229  371990 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.138367  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
	I1219 03:24:20.141307  371990 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1219 03:24:20.141417  371990 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1219 03:24:20.141469  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:20.141490  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:20.143537  371990 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 03:24:20.144502  371990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:24:20.148822  371990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1219 03:24:20.148843  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:24:20.161612  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1219 03:24:20.379173  371990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:24:20.379262  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.379275  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-837172 minikube.k8s.io/updated_at=2025_12_19T03_24_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=newest-cni-837172 minikube.k8s.io/primary=true
	I1219 03:24:20.388746  371990 ops.go:34] apiserver oom_adj: -16
	I1219 03:24:20.454762  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.955824  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.454834  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.954831  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.455563  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.955820  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.454808  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.955426  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.454807  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.521140  371990 kubeadm.go:1114] duration metric: took 4.141930442s to wait for elevateKubeSystemPrivileges
	I1219 03:24:24.521185  371990 kubeadm.go:403] duration metric: took 11.630460792s to StartCluster
	I1219 03:24:24.521209  371990 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.521280  371990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:24.522690  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.522969  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:24:24.522985  371990 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:24.523053  371990 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:24.523152  371990 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-837172"
	I1219 03:24:24.523166  371990 addons.go:70] Setting default-storageclass=true in profile "newest-cni-837172"
	I1219 03:24:24.523191  371990 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-837172"
	I1219 03:24:24.523195  371990 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-837172"
	I1219 03:24:24.523231  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.523251  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:24.523588  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.523773  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.524387  371990 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:24.525579  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:24.547572  371990 addons.go:239] Setting addon default-storageclass=true in "newest-cni-837172"
	I1219 03:24:24.547634  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.547832  371990 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:24.548129  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.552104  371990 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.552127  371990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:24.552183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.578893  371990 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.579252  371990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:24.579323  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.583084  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.603726  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.615978  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:24:24.668369  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:24.704139  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.719590  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.803320  371990 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1219 03:24:24.805437  371990 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:24.805497  371990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:25.029229  371990 api_server.go:72] duration metric: took 506.215716ms to wait for apiserver process to appear ...
	I1219 03:24:25.029261  371990 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:25.029282  371990 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:25.034829  371990 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:24:25.035777  371990 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:25.035813  371990 api_server.go:131] duration metric: took 6.544499ms to wait for apiserver health ...
	I1219 03:24:25.035828  371990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:25.038607  371990 system_pods.go:59] 8 kube-system pods found
	I1219 03:24:25.038639  371990 system_pods.go:61] "coredns-7d764666f9-ckc9j" [5bc3e758-2623-4eae-87fe-a58b932c9e87] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038651  371990 system_pods.go:61] "etcd-newest-cni-837172" [59f28fae-3605-487b-a1b8-c3851c47abac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:25.038659  371990 system_pods.go:61] "kindnet-846n4" [b45c7fbd-085c-4972-b312-0973aab68ddc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:25.038670  371990 system_pods.go:61] "kube-apiserver-newest-cni-837172" [8d92900e-716d-42ad-9d88-1ca6d0ddf5c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:25.038678  371990 system_pods.go:61] "kube-controller-manager-newest-cni-837172" [46b3ad5a-64d1-4e1f-8bdf-ce613dcd6348] Running
	I1219 03:24:25.038684  371990 system_pods.go:61] "kube-proxy-6wg2n" [356cd689-df37-49ac-a3f2-1931978ccf64] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:24:25.038690  371990 system_pods.go:61] "kube-scheduler-newest-cni-837172" [da065d09-cc65-42e7-8e0d-9f9709cafaf9] Running
	I1219 03:24:25.038695  371990 system_pods.go:61] "storage-provisioner" [ba402c27-5828-489f-a656-bc0ef2e8f05e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038713  371990 system_pods.go:74] duration metric: took 2.880877ms to wait for pod list to return data ...
	I1219 03:24:25.038720  371990 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:25.038969  371990 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:24:25.040226  371990 addons.go:546] duration metric: took 517.179033ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:24:25.040990  371990 default_sa.go:45] found service account: "default"
	I1219 03:24:25.041006  371990 default_sa.go:55] duration metric: took 2.27792ms for default service account to be created ...
	I1219 03:24:25.041015  371990 kubeadm.go:587] duration metric: took 518.007856ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:25.041030  371990 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:25.043438  371990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:25.043465  371990 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:25.043494  371990 node_conditions.go:105] duration metric: took 2.45952ms to run NodePressure ...
	I1219 03:24:25.043503  371990 start.go:242] waiting for startup goroutines ...
	I1219 03:24:25.308179  371990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-837172" context rescaled to 1 replicas
	I1219 03:24:25.308227  371990 start.go:247] waiting for cluster config update ...
	I1219 03:24:25.308241  371990 start.go:256] writing updated cluster config ...
	I1219 03:24:25.308502  371990 ssh_runner.go:195] Run: rm -f paused
	I1219 03:24:25.358553  371990 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:24:25.360429  371990 out.go:179] * Done! kubectl is now configured to use "newest-cni-837172" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.472463868Z" level=info msg="Created container d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/clear-stale-pid" id=36313b84-f615-418e-a0c2-1800c7ad9bba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.473232027Z" level=info msg="Starting container: d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885" id=fa0d9c25-58bb-41e7-a751-32c0a5ee2072 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.475578796Z" level=info msg="Started container" PID=1981 containerID=d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/clear-stale-pid id=fa0d9c25-58bb-41e7-a751-32c0a5ee2072 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4243852a152c440419680bef0dfbf6f37d15c21f97f0c7059823f1520f9fc99c
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.135352218Z" level=info msg="Checking image status: kong:3.9" id=b06c69a2-5538-434a-8a72-4f2b223b8bfe name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.135542093Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.137747838Z" level=info msg="Checking image status: kong:3.9" id=9a4a1d08-b9e8-4169-83f7-aec209f5e0b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.13786748Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.142013294Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy" id=de23cef6-56d6-4c7e-a45c-c26d931492ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.142148287Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.148827695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.149609559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.189335726Z" level=info msg="Created container 20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy" id=de23cef6-56d6-4c7e-a45c-c26d931492ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.190165238Z" level=info msg="Starting container: 20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2" id=b6a1bf8a-ce95-4b10-adea-c7af131524c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.192808924Z" level=info msg="Started container" PID=1991 containerID=20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy id=b6a1bf8a-ce95-4b10-adea-c7af131524c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4243852a152c440419680bef0dfbf6f37d15c21f97f0c7059823f1520f9fc99c
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.183170694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=084cd7a4-6ece-4c0a-8397-94465f3314df name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.184121665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4d531b84-18eb-47e0-aad8-61f09bca340d name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.185241228Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=156d4e77-f8f4-4dbd-a7c4-e7b60cc39d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.18538707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.189952355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190095237Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/77220e3b053f54d12f604ca801e81ba41d5ddbdc6900f8b55f5f4338438a1241/merged/etc/passwd: no such file or directory"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190117712Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/77220e3b053f54d12f604ca801e81ba41d5ddbdc6900f8b55f5f4338438a1241/merged/etc/group: no such file or directory"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190333672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.231341429Z" level=info msg="Created container 3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904: kube-system/storage-provisioner/storage-provisioner" id=156d4e77-f8f4-4dbd-a7c4-e7b60cc39d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.232031749Z" level=info msg="Starting container: 3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904" id=26b83eef-8d30-46ec-89bd-a94e4e05ed3b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.234124046Z" level=info msg="Started container" PID=3409 containerID=3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904 description=kube-system/storage-provisioner/storage-provisioner id=26b83eef-8d30-46ec-89bd-a94e4e05ed3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c1876caf93065afdf67bc083a0b6fc921040c35760414f728f15ba554180160
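
The CRI-O lines above show the dashboard-kong and storage-provisioner containers being created and started on embed-certs-805185; the container-status table that follows is the same information as reported by the CRI. A minimal sketch for cross-checking it by hand, assuming the embed-certs-805185 profile is still running and that crictl is present inside the node (typical for minikube, but an assumption here):

	// List all containers on the node through the CRI, as the table below does.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-805185",
			"ssh", "--", "sudo", "crictl", "ps", "-a")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("crictl listing failed: %v", err)
		}
	}
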
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	3d7dd245b233f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   0c1876caf9306       storage-provisioner                                     kube-system
	20beadfa950bf       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   4243852a152c4       kubernetes-dashboard-kong-9849c64bd-9p6zf               kubernetes-dashboard
	d14c5a7b642f8       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   4243852a152c4       kubernetes-dashboard-kong-9849c64bd-9p6zf               kubernetes-dashboard
	a0449cd056863       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   db4923db488cf       kubernetes-dashboard-auth-658884f98f-455ns              kubernetes-dashboard
	95cc887c80866       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   4037dc076fb10       kubernetes-dashboard-web-5c9f966b98-gfhnn               kubernetes-dashboard
	310b39bacccab       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   0be0ce9f85847       kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr   kubernetes-dashboard
	5b4f781150596       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   5af5195e34c00       kubernetes-dashboard-api-78bc857d5c-fljnp               kubernetes-dashboard
	37fd60f84cab5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           18 minutes ago      Running             coredns                                0                   f0f30eba64edf       coredns-66bc5c9577-8gphx                                kube-system
	e8ff222bdb55d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   523d107bc5d8f       busybox                                                 default
	3e6a9f16432bb       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           18 minutes ago      Running             kube-proxy                             0                   4fb4de09d3b1c       kube-proxy-p8pqg                                        kube-system
	3df3cb7877110       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   0c1876caf9306       storage-provisioner                                     kube-system
	9734264bc0316       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   e566763b65b28       kindnet-jj9ms                                           kube-system
	dca8f84f406b7       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           18 minutes ago      Running             kube-controller-manager                0                   1479078fc9c08       kube-controller-manager-embed-certs-805185              kube-system
	c0e9c22a25238       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           18 minutes ago      Running             kube-scheduler                         0                   49e7ef6075ae3       kube-scheduler-embed-certs-805185                       kube-system
	e4f794af7924e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           18 minutes ago      Running             etcd                                   0                   c8ef977665655       etcd-embed-certs-805185                                 kube-system
	fa9a88171fdc7       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           18 minutes ago      Running             kube-apiserver                         0                   d92a0248993ee       kube-apiserver-embed-certs-805185                       kube-system
	
	
	==> coredns [37fd60f84cab5a40d06b06eda266df17eadd8d0a9ee56f7b235782087ec0083a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40097 - 29931 "HINFO IN 2735309851509519627.415811791505313667. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.415024708s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
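
The CoreDNS errors above are all one symptom: list calls to the in-cluster apiserver service at 10.96.0.1:443 timing out shortly after startup (they stop once the API becomes reachable). A self-contained sketch of that connectivity probe, meaningful only when run from inside the cluster network (for example via minikube ssh); the address is copied from the log:

	// Raw TCP dial to the apiserver service IP, the step CoreDNS was failing.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver service unreachable:", err) // the i/o timeout CoreDNS logged
			return
		}
		defer conn.Close()
		fmt.Println("apiserver service reachable")
	}
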
	
	
	==> describe nodes <==
	Name:               embed-certs-805185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-805185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-805185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-805185
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:24:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:05:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-805185
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e529c61b-35ad-4151-ab38-525026482d8c
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-8gphx                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-embed-certs-805185                                  100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-jj9ms                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-embed-certs-805185                        250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-805185               200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-p8pqg                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-805185                        100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-78bc857d5c-fljnp                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-658884f98f-455ns               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-9p6zf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-gfhnn                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
	  Normal  NodeReady                19m                kubelet          Node embed-certs-805185 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
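
A quick arithmetic check of the "Allocated resources" table above, using only the numbers printed there: 1250m of CPU requested on an 8-CPU node is 15.6%, which kubectl truncates to the 15% shown.

	// Recompute the CPU request percentage from the describe output.
	package main

	import "fmt"

	func main() {
		const (
			allocatableMilliCPU = 8 * 1000 // node capacity from the table
			requestedMilliCPU   = 1250     // sum of pod CPU requests
		)
		pct := float64(requestedMilliCPU) / float64(allocatableMilliCPU) * 100
		fmt.Printf("cpu requests: %dm / %dm = %.1f%%\n", requestedMilliCPU, allocatableMilliCPU, pct)
	}
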
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [e4f794af7924e48700f3eb1f53c1070c15bc99d17539d5f097c1a7c62dded81f] <==
	{"level":"warn","ts":"2025-12-19T03:05:53.719221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.745613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.755575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.779584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.825911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.666523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.686420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.703183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.714636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.724682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.735837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.746037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.755589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.784157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.802436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.825473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:06:04.808381Z","caller":"traceutil/trace.go:172","msg":"trace[24513416] transaction","detail":"{read_only:false; response_revision:699; number_of_response:1; }","duration":"118.600036ms","start":"2025-12-19T03:06:04.689759Z","end":"2025-12-19T03:06:04.808359Z","steps":["trace[24513416] 'process raft request'  (duration: 118.551956ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:06:04.808596Z","caller":"traceutil/trace.go:172","msg":"trace[1604688651] transaction","detail":"{read_only:false; response_revision:698; number_of_response:1; }","duration":"178.640288ms","start":"2025-12-19T03:06:04.629933Z","end":"2025-12-19T03:06:04.808573Z","steps":["trace[1604688651] 'process raft request'  (duration: 128.977486ms)","trace[1604688651] 'compare'  (duration: 49.259539ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:06:10.029004Z","caller":"traceutil/trace.go:172","msg":"trace[1715983664] transaction","detail":"{read_only:false; response_revision:712; number_of_response:1; }","duration":"117.29944ms","start":"2025-12-19T03:06:09.911684Z","end":"2025-12-19T03:06:10.028983Z","steps":["trace[1715983664] 'process raft request'  (duration: 95.039156ms)","trace[1715983664] 'compare'  (duration: 21.881704ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:15:53.166470Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2025-12-19T03:15:53.173813Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":959,"took":"6.970165ms","hash":136659999,"current-db-size-bytes":3895296,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3895296,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:15:53.173870Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":136659999,"revision":959,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:53.171463Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1202}
	{"level":"info","ts":"2025-12-19T03:20:53.173821Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1202,"took":"1.992974ms","hash":2951296099,"current-db-size-bytes":3895296,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2015232,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:20:53.173858Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2951296099,"revision":1202,"compact-revision":959}
	
	
	==> kernel <==
	 03:24:40 up  1:07,  0 user,  load average: 1.93, 0.90, 1.23
	Linux embed-certs-805185 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9734264bc03165e973381a11181db3d0d85532eb608a1d648d545affcc0f5657] <==
	I1219 03:22:35.868429       1 main.go:301] handling current node
	I1219 03:22:45.867952       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:22:45.867995       1 main.go:301] handling current node
	I1219 03:22:55.871868       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:22:55.871903       1 main.go:301] handling current node
	I1219 03:23:05.872806       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:05.872843       1 main.go:301] handling current node
	I1219 03:23:15.868177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:15.868210       1 main.go:301] handling current node
	I1219 03:23:25.867534       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:25.867573       1 main.go:301] handling current node
	I1219 03:23:35.867892       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:35.867944       1 main.go:301] handling current node
	I1219 03:23:45.874749       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:45.874784       1 main.go:301] handling current node
	I1219 03:23:55.871842       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:55.871874       1 main.go:301] handling current node
	I1219 03:24:05.867919       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:05.867959       1 main.go:301] handling current node
	I1219 03:24:15.868601       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:15.868645       1 main.go:301] handling current node
	I1219 03:24:25.868249       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:25.868398       1 main.go:301] handling current node
	I1219 03:24:35.867612       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:35.867672       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fa9a88171fdc75e01df96259a9096dab5e5ab76217553f36b6a9922f9e0f06fe] <==
	W1219 03:05:57.666179       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:05:57.686342       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.703087       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.714554       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.724651       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:05:57.735825       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.745925       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.755549       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.773268       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.784117       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:05:57.795282       1 controller.go:667] quota admission added evaluator for: endpoints
	W1219 03:05:57.802417       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.819295       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:05:57.894304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:57.991073       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:58.143944       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:58.544436       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:05:58.579983       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:58.584890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:58.595427       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.101.245.250"}
	I1219 03:05:58.600356       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.48.46"}
	I1219 03:05:58.604096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.96.197.102"}
	I1219 03:05:58.610018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.99.175"}
	I1219 03:05:58.616775       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.250.73"}
	I1219 03:15:54.401313       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [dca8f84f406b7acd8227404694ece4fd29d232591939f26e4325c52e7c00de60] <==
	I1219 03:05:57.736964       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:05:57.737011       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 03:05:57.737131       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 03:05:57.737248       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:05:57.737588       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:05:57.737617       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 03:05:57.738742       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:05:57.738773       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:05:57.738742       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:05:57.744005       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:05:57.744039       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:05:57.744147       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:05:57.744203       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:05:57.744212       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:05:57.744220       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:05:57.746255       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:05:57.747424       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:05:57.753898       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:05:57.755198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:05:58.841753       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:58.868581       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:58.874821       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:58.881981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:58.882003       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:05:58.882012       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3e6a9f16432bb2d0f57c9e657b776eaae753f9a9bc474bcd825b022f2cf4726b] <==
	I1219 03:05:55.448309       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:55.528222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:05:55.628850       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:05:55.628898       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:05:55.629015       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:55.649512       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:55.649574       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:05:55.655220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:55.655665       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:05:55.655695       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:55.657141       1 config.go:200] "Starting service config controller"
	I1219 03:05:55.657618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:55.657697       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:55.657751       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:55.658014       1 config.go:309] "Starting node config controller"
	I1219 03:05:55.658027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:55.658041       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:55.658491       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:55.658532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:55.757856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:55.759651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:05:55.759720       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0e9c22a2523807e95fb727795c040c95c5bd029feb66a6a92f7087e4503774e] <==
	I1219 03:05:53.750115       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:05:54.696153       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:05:54.696180       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:54.700571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.700567       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:05:54.700623       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.700627       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:05:54.700603       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.700660       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.701061       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:54.701240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:54.801652       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.801670       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.801652       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785080     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474hq\" (UniqueName: \"kubernetes.io/projected/c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060-kube-api-access-474hq\") pod \"kubernetes-dashboard-auth-658884f98f-455ns\" (UID: \"c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785095     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab309a53-9e4b-4a01-899a-797c7ba5208d-tmp-volume\") pod \"kubernetes-dashboard-api-78bc857d5c-fljnp\" (UID: \"ab309a53-9e4b-4a01-899a-797c7ba5208d\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785116     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzfm\" (UniqueName: \"kubernetes.io/projected/ab309a53-9e4b-4a01-899a-797c7ba5208d-kube-api-access-6zzfm\") pod \"kubernetes-dashboard-api-78bc857d5c-fljnp\" (UID: \"ab309a53-9e4b-4a01-899a-797c7ba5208d\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785138     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f73d26a9-48d2-47fc-a241-1a7504297988-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr\" (UID: \"f73d26a9-48d2-47fc-a241-1a7504297988\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785164     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7smc\" (UniqueName: \"kubernetes.io/projected/2c9c9b86-fd2a-4420-b98d-27dd078fe2c6-kube-api-access-k7smc\") pod \"kubernetes-dashboard-web-5c9f966b98-gfhnn\" (UID: \"2c9c9b86-fd2a-4420-b98d-27dd078fe2c6\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785222     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/30a45022-1901-4ea6-8857-08ff9a85c27a-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-9p6zf\" (UID: \"30a45022-1901-4ea6-8857-08ff9a85c27a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf"
	Dec 19 03:05:59 embed-certs-805185 kubelet[737]: I1219 03:05:59.997824     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:59 embed-certs-805185 kubelet[737]: I1219 03:05:59.997922     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.037195     737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.097959     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp" podStartSLOduration=1.09098601 podStartE2EDuration="2.097935412s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:58.990618466 +0000 UTC m=+7.051227125" lastFinishedPulling="2025-12-19 03:05:59.997567856 +0000 UTC m=+8.058176527" observedRunningTime="2025-12-19 03:06:00.097689886 +0000 UTC m=+8.158298580" watchObservedRunningTime="2025-12-19 03:06:00.097935412 +0000 UTC m=+8.158544082"
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.934970     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.936003     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:02 embed-certs-805185 kubelet[737]: I1219 03:06:02.793612     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr" podStartSLOduration=2.864491069 podStartE2EDuration="4.793587364s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.005628182 +0000 UTC m=+7.066236856" lastFinishedPulling="2025-12-19 03:06:00.934724484 +0000 UTC m=+8.995333151" observedRunningTime="2025-12-19 03:06:01.111916375 +0000 UTC m=+9.172525051" watchObservedRunningTime="2025-12-19 03:06:02.793587364 +0000 UTC m=+10.854196040"
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.028076     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.028167     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.121599     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn" podStartSLOduration=1.100576683 podStartE2EDuration="6.121572519s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.006841332 +0000 UTC m=+7.067449988" lastFinishedPulling="2025-12-19 03:06:04.027837166 +0000 UTC m=+12.088445824" observedRunningTime="2025-12-19 03:06:04.121201067 +0000 UTC m=+12.181809743" watchObservedRunningTime="2025-12-19 03:06:04.121572519 +0000 UTC m=+12.182181195"
	Dec 19 03:06:05 embed-certs-805185 kubelet[737]: I1219 03:06:05.244202     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:05 embed-certs-805185 kubelet[737]: I1219 03:06:05.244300     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:06 embed-certs-805185 kubelet[737]: I1219 03:06:06.135487     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns" podStartSLOduration=1.904186191 podStartE2EDuration="8.135456486s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.012692427 +0000 UTC m=+7.073301081" lastFinishedPulling="2025-12-19 03:06:05.243962705 +0000 UTC m=+13.304571376" observedRunningTime="2025-12-19 03:06:06.134881051 +0000 UTC m=+14.195489728" watchObservedRunningTime="2025-12-19 03:06:06.135456486 +0000 UTC m=+14.196065161"
	Dec 19 03:06:12 embed-certs-805185 kubelet[737]: I1219 03:06:12.162006     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf" podStartSLOduration=2.749011678 podStartE2EDuration="14.161975971s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.023057738 +0000 UTC m=+7.083666406" lastFinishedPulling="2025-12-19 03:06:10.436022033 +0000 UTC m=+18.496630699" observedRunningTime="2025-12-19 03:06:12.161201474 +0000 UTC m=+20.221810169" watchObservedRunningTime="2025-12-19 03:06:12.161975971 +0000 UTC m=+20.222584647"
	Dec 19 03:06:26 embed-certs-805185 kubelet[737]: I1219 03:06:26.182763     737 scope.go:117] "RemoveContainer" containerID="3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2"
	Dec 19 03:24:37 embed-certs-805185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:24:37 embed-certs-805185 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:24:37 embed-certs-805185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:24:37 embed-certs-805185 systemd[1]: kubelet.service: Consumed 25.357s CPU time.
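
The final systemd lines show kubelet.service being stopped cleanly on embed-certs-805185, most likely from the Pause step of this test group rather than a crash. A small sketch for confirming the unit state on the node; the profile name and the use of systemd inside the node are taken from this run:

	// Query kubelet's systemd state via minikube ssh. systemctl is-active
	// exits non-zero when the unit is inactive, so the error is ignored on
	// purpose and only the printed state is used.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, _ := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-805185",
			"ssh", "--", "sudo", "systemctl", "is-active", "kubelet").CombinedOutput()
		fmt.Println("kubelet:", strings.TrimSpace(string(out))) // "inactive" is expected after a pause
	}
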
	
	
	==> kubernetes-dashboard [310b39bacccabe01a7800d05d30675f93096703212a17f66095da8c1865d22d2] <==
	E1219 03:22:01.082390       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:01.082525       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:24:01.082114       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:22:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	
	
	==> kubernetes-dashboard [5b4f7811505964d9e14b039acff4c61a760a6112e63bfff6242995499ee3b049] <==
	I1219 03:06:00.157650       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:06:00.157768       1 init.go:49] Using in-cluster config
	I1219 03:06:00.158043       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:06:00.158057       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:06:00.158064       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:06:00.158072       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:06:00.164066       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:06:00.164098       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:06:00.190400       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:06:00.190937       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:06:30.196244       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [95cc887c80866d0ea33ef79f7654625e51e2590ee08a32fae89a8d46347f529a] <==
	I1219 03:06:04.155476       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:06:04.155552       1 init.go:48] Using in-cluster config
	I1219 03:06:04.155767       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [a0449cd05686367a0a816405c686858df4a264fbcacf43407705baff34ccbc5a] <==
	I1219 03:06:05.338222       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:06:05.338287       1 init.go:49] Using in-cluster config
	I1219 03:06:05.338471       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904] <==
	W1219 03:24:15.884269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:17.887577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:17.891794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:19.895638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:19.899375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:21.903213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:21.907243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.910143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.914640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.918600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.924444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.928290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.932914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.935848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.941274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.944766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.948619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.952001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.956116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.959480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.963533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.967596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.971935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.976025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.980918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2] <==
	I1219 03:05:55.403581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:06:25.407035       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
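The fatal `dial tcp 10.96.0.1:443: i/o timeout` in the second storage-provisioner log above means that container could not reach the in-cluster `kubernetes` Service ClusterIP (10.96.0.1 is the first host address of this profile's 10.96.0.0/12 ServiceCIDR); the kubelet log earlier records a RemoveContainer call for the same container ID. A minimal manual check, assuming the `embed-certs-805185` kubeconfig context is still reachable, could be:

	kubectl --context embed-certs-805185 get svc kubernetes -n default -o wide
	kubectl --context embed-certs-805185 get endpointslices -n default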
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185: exit status 2 (324.792136ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-805185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-805185
helpers_test.go:244: (dbg) docker inspect embed-certs-805185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	        "Created": "2025-12-19T03:04:41.634228453Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 350292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:45.883197161Z",
	            "FinishedAt": "2025-12-19T03:05:44.649106592Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/hosts",
	        "LogPath": "/var/lib/docker/containers/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415/c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415-json.log",
	        "Name": "/embed-certs-805185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-805185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-805185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2b5f77a65ce95164c29522c568338dd4bd9af5ca9e65fb44d824bf38f171415",
	                "LowerDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed796c6e9d2f9edb740649c569172aeee5d6bffb367753798bf2544f5c8616e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-805185",
	                "Source": "/var/lib/docker/volumes/embed-certs-805185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-805185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-805185",
	                "name.minikube.sigs.k8s.io": "embed-certs-805185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7457f8142accad01c6ab136b22c6fa80ee06dd20e79f2a84f99ffb94723b6308",
	            "SandboxKey": "/var/run/docker/netns/7457f8142acc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-805185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "67670b4143fc2c858529db8e9ece90091b3a7a00c5465943bbbbea83d055a550",
	                    "EndpointID": "a46e3becc7625d5ecd97a1cbfefeda9844ff31ce4ce29ae0c0d5c0cbe2af09be",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d6:26:96:9c:9e:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-805185",
	                        "c2b5f77a65ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
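The inspect output above confirms the container is still running and publishes the API server port 8443/tcp on 127.0.0.1:33131. A minimal sketch for extracting that mapping directly, using the same Go-template style the harness applies to 22/tcp further down in these logs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-805185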
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185
E1219 03:24:41.259545    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185: exit status 2 (321.795236ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-805185 logs -n 25: (1.27902099s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:04 UTC │
	│ start   │ -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:04 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p embed-certs-805185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p embed-certs-805185 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ stop    │ -p newest-cni-837172 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ image   │ embed-certs-805185 image list --format=json                                                                                                                                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p embed-certs-805185 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:01.036023  371990 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:01.036565  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.036582  371990 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:01.036589  371990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:01.037114  371990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:01.038234  371990 out.go:368] Setting JSON to false
	I1219 03:24:01.039510  371990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3992,"bootTime":1766110649,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:01.039592  371990 start.go:143] virtualization: kvm guest
	I1219 03:24:01.041656  371990 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:01.043211  371990 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:01.043253  371990 notify.go:221] Checking for updates...
	I1219 03:24:01.045604  371990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:01.046873  371990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:01.047985  371990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:01.052214  371990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:01.053413  371990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:01.055079  371990 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055198  371990 config.go:182] Loaded profile config "embed-certs-805185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:01.055324  371990 config.go:182] Loaded profile config "no-preload-278042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:01.055430  371990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:01.080518  371990 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:01.080672  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.143010  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.132535066 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.143105  371990 docker.go:319] overlay module found
	I1219 03:24:01.144954  371990 out.go:179] * Using the docker driver based on user configuration
	I1219 03:24:01.146278  371990 start.go:309] selected driver: docker
	I1219 03:24:01.146299  371990 start.go:928] validating driver "docker" against <nil>
	I1219 03:24:01.146315  371990 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:01.147198  371990 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:01.207023  371990 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 03:24:01.196664778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:01.207180  371990 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1219 03:24:01.207207  371990 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1219 03:24:01.207525  371990 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:01.209632  371990 out.go:179] * Using Docker driver with root privileges
	I1219 03:24:01.210891  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:01.210974  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:01.210985  371990 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1219 03:24:01.211049  371990 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:01.212320  371990 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:01.213422  371990 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:01.214779  371990 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:01.215953  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.216006  371990 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:01.216025  371990 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:01.216047  371990 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:01.216120  371990 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:01.216133  371990 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:01.216218  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:01.216239  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json: {Name:mkf2bb7657c731e279d378a607e1a523b320a47e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:01.237349  371990 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:01.237368  371990 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:01.237386  371990 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:01.237420  371990 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:01.237512  371990 start.go:364] duration metric: took 75.602µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:01.237534  371990 start.go:93] Provisioning new machine with config: &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:01.237590  371990 start.go:125] createHost starting for "" (driver="docker")
	I1219 03:24:01.239751  371990 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1219 03:24:01.239974  371990 start.go:159] libmachine.API.Create for "newest-cni-837172" (driver="docker")
	I1219 03:24:01.240017  371990 client.go:173] LocalClient.Create starting
	I1219 03:24:01.240087  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem
	I1219 03:24:01.240117  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240136  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240185  371990 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem
	I1219 03:24:01.240204  371990 main.go:144] libmachine: Decoding PEM data...
	I1219 03:24:01.240213  371990 main.go:144] libmachine: Parsing certificate...
	I1219 03:24:01.240512  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1219 03:24:01.257883  371990 cli_runner.go:211] docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1219 03:24:01.258008  371990 network_create.go:284] running [docker network inspect newest-cni-837172] to gather additional debugging logs...
	I1219 03:24:01.258034  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172
	W1219 03:24:01.275377  371990 cli_runner.go:211] docker network inspect newest-cni-837172 returned with exit code 1
	I1219 03:24:01.275412  371990 network_create.go:287] error running [docker network inspect newest-cni-837172]: docker network inspect newest-cni-837172: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-837172 not found
	I1219 03:24:01.275429  371990 network_create.go:289] output of [docker network inspect newest-cni-837172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-837172 not found
	
	** /stderr **
	I1219 03:24:01.275535  371990 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:01.294388  371990 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d70e62b79a31 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:cf:22:72:cb:a0} reservation:<nil>}
	I1219 03:24:01.295272  371990 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-980aea652065 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ba:dd:9c:97:fb:7d} reservation:<nil>}
	I1219 03:24:01.296258  371990 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42b42f6a5044 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:a2:1e:31:1b:21:84} reservation:<nil>}
	I1219 03:24:01.297569  371990 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec48c0}
	I1219 03:24:01.297599  371990 network_create.go:124] attempt to create docker network newest-cni-837172 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1219 03:24:01.297651  371990 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-837172 newest-cni-837172
	I1219 03:24:01.350655  371990 network_create.go:108] docker network newest-cni-837172 192.168.76.0/24 created
	I1219 03:24:01.350682  371990 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-837172" container
	I1219 03:24:01.350794  371990 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1219 03:24:01.370331  371990 cli_runner.go:164] Run: docker volume create newest-cni-837172 --label name.minikube.sigs.k8s.io=newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true
	I1219 03:24:01.391519  371990 oci.go:103] Successfully created a docker volume newest-cni-837172
	I1219 03:24:01.391624  371990 cli_runner.go:164] Run: docker run --rm --name newest-cni-837172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --entrypoint /usr/bin/test -v newest-cni-837172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1219 03:24:01.840345  371990 oci.go:107] Successfully prepared a docker volume newest-cni-837172
	I1219 03:24:01.840449  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:01.840465  371990 kic.go:194] Starting extracting preloaded images to volume ...
	I1219 03:24:01.840529  371990 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1219 03:24:05.697885  371990 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-837172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.857303195s)
	I1219 03:24:05.697924  371990 kic.go:203] duration metric: took 3.857455339s to extract preloaded images to volume ...
	W1219 03:24:05.698024  371990 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1219 03:24:05.698058  371990 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1219 03:24:05.698100  371990 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1219 03:24:05.757547  371990 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-837172 --name newest-cni-837172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-837172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-837172 --network newest-cni-837172 --ip 192.168.76.2 --volume newest-cni-837172:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1219 03:24:06.051568  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Running}}
	I1219 03:24:06.072261  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.093313  371990 cli_runner.go:164] Run: docker exec newest-cni-837172 stat /var/lib/dpkg/alternatives/iptables
	I1219 03:24:06.144238  371990 oci.go:144] the created container "newest-cni-837172" has a running status.
	I1219 03:24:06.144278  371990 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa...
	I1219 03:24:06.230796  371990 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1219 03:24:06.256299  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.273734  371990 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1219 03:24:06.273758  371990 kic_runner.go:114] Args: [docker exec --privileged newest-cni-837172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1219 03:24:06.341522  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:06.363532  371990 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:06.363655  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:06.390168  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:06.390536  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:06.390552  371990 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:06.391620  371990 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34054->127.0.0.1:33138: read: connection reset by peer
	I1219 03:24:09.536680  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.536733  371990 ubuntu.go:182] provisioning hostname "newest-cni-837172"
	I1219 03:24:09.536797  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.555045  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.555325  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.555340  371990 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-837172 && echo "newest-cni-837172" | sudo tee /etc/hostname
	I1219 03:24:09.709116  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:09.709183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.727847  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:09.728289  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:09.728322  371990 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-837172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-837172/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-837172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:09.871486  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:09.871529  371990 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:24:09.871588  371990 ubuntu.go:190] setting up certificates
	I1219 03:24:09.871600  371990 provision.go:84] configureAuth start
	I1219 03:24:09.871666  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:09.890551  371990 provision.go:143] copyHostCerts
	I1219 03:24:09.890608  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:24:09.890616  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:24:09.890710  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:24:09.890819  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:24:09.890829  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:24:09.890867  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:24:09.890920  371990 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:24:09.890933  371990 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:24:09.890959  371990 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:24:09.891015  371990 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-837172 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-837172]
	I1219 03:24:09.923962  371990 provision.go:177] copyRemoteCerts
	I1219 03:24:09.924021  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:09.924055  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:09.943177  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.046012  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:10.066001  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:10.083456  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:24:10.101464  371990 provision.go:87] duration metric: took 229.847544ms to configureAuth
	I1219 03:24:10.101492  371990 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:10.101673  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:10.101801  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.120532  371990 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:10.120821  371990 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1219 03:24:10.120839  371990 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:24:10.410477  371990 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:24:10.410502  371990 machine.go:97] duration metric: took 4.046944113s to provisionDockerMachine
	I1219 03:24:10.410513  371990 client.go:176] duration metric: took 9.170488353s to LocalClient.Create
	I1219 03:24:10.410535  371990 start.go:167] duration metric: took 9.170561433s to libmachine.API.Create "newest-cni-837172"
	I1219 03:24:10.410546  371990 start.go:293] postStartSetup for "newest-cni-837172" (driver="docker")
	I1219 03:24:10.410559  371990 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:10.410613  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:10.410664  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.430222  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.533641  371990 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:10.537745  371990 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:10.537783  371990 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:10.537806  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:24:10.537857  371990 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:24:10.537934  371990 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:24:10.538030  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:10.545818  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:10.566832  371990 start.go:296] duration metric: took 156.272185ms for postStartSetup
	I1219 03:24:10.567244  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.586641  371990 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:10.586934  371990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:10.586987  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.604894  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.703924  371990 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:10.708480  371990 start.go:128] duration metric: took 9.470874061s to createHost
	I1219 03:24:10.708519  371990 start.go:83] releasing machines lock for "newest-cni-837172", held for 9.47099552s
	I1219 03:24:10.708596  371990 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:10.727823  371990 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:10.727853  371990 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:10.727877  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.727922  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:10.748155  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.748577  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:10.899556  371990 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:10.906157  371990 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:24:10.942010  371990 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:10.946776  371990 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:10.946834  371990 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:10.972921  371990 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:24:10.972943  371990 start.go:496] detecting cgroup driver to use...
	I1219 03:24:10.972971  371990 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:10.973032  371990 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:24:10.989146  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:24:11.002203  371990 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:11.002282  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:11.018422  371990 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:11.035554  371990 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:11.119919  371990 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:11.207179  371990 docker.go:234] disabling docker service ...
	I1219 03:24:11.207252  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:11.225572  371990 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:11.237859  371990 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:11.323024  371990 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:11.407303  371990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:24:11.419524  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:11.433341  371990 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:24:11.433395  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.443408  371990 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:24:11.443468  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.452460  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.460889  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.469451  371990 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:11.477277  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.485766  371990 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.499106  371990 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:11.508174  371990 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:11.515313  371990 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:11.522319  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:11.604796  371990 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1219 03:24:11.746317  371990 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:24:11.746376  371990 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:24:11.750220  371990 start.go:564] Will wait 60s for crictl version
	I1219 03:24:11.750278  371990 ssh_runner.go:195] Run: which crictl
	I1219 03:24:11.753821  371990 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:11.777608  371990 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:24:11.777714  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.804073  371990 ssh_runner.go:195] Run: crio --version
	I1219 03:24:11.833640  371990 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:24:11.834886  371990 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:11.852567  371990 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:11.856667  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:11.871316  371990 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:11.872497  371990 kubeadm.go:884] updating cluster {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:11.872642  371990 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:11.872692  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.904183  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.904204  371990 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:24:11.904263  371990 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:11.930999  371990 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:11.931020  371990 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:11.931026  371990 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:24:11.931148  371990 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-837172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:24:11.931228  371990 ssh_runner.go:195] Run: crio config
	I1219 03:24:11.976472  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:11.976491  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:11.976503  371990 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:11.976531  371990 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-837172 NodeName:newest-cni-837172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:11.976658  371990 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-837172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:24:11.976739  371990 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:11.985021  371990 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:11.985080  371990 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:11.992859  371990 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:24:12.006496  371990 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:12.021643  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1219 03:24:12.034441  371990 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:12.038092  371990 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:12.047986  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:12.128789  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:12.152988  371990 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172 for IP: 192.168.76.2
	I1219 03:24:12.153016  371990 certs.go:195] generating shared ca certs ...
	I1219 03:24:12.153035  371990 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.153175  371990 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:24:12.153220  371990 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:24:12.153233  371990 certs.go:257] generating profile certs ...
	I1219 03:24:12.153289  371990 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key
	I1219 03:24:12.153302  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt with IP's: []
	I1219 03:24:12.271406  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt ...
	I1219 03:24:12.271435  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.crt: {Name:mke8fed86df635a05f54420e92870363146991f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271601  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key ...
	I1219 03:24:12.271612  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key: {Name:mk39737e3f76352137132fe8060ef391a0d43bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.271690  371990 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b
	I1219 03:24:12.271717  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1219 03:24:12.379475  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b ...
	I1219 03:24:12.379503  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b: {Name:mkc4d74c8f8c4deb077c8f688d203329a2c5750d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379662  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b ...
	I1219 03:24:12.379675  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b: {Name:mk1b93ad6f4ca843c3104dc76975062dde81eaef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.379761  371990 certs.go:382] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt
	I1219 03:24:12.379853  371990 certs.go:386] copying /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b -> /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key
	I1219 03:24:12.379918  371990 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key
	I1219 03:24:12.379940  371990 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt with IP's: []
	I1219 03:24:12.467338  371990 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt ...
	I1219 03:24:12.467368  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt: {Name:mk5dc8f653da407b5f14ca799301800eac0952c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467561  371990 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key ...
	I1219 03:24:12.467581  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key: {Name:mk4063cc1af4dbf73c9c390b468c828c35385b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:12.467821  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:24:12.467864  371990 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:12.467875  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:24:12.467901  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:12.467925  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:12.467953  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:12.468001  371990 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:12.468519  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:12.487159  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:12.504306  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:12.521550  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:24:12.538418  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:12.554861  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:12.572166  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:12.589324  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:24:12.606224  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:12.625269  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:24:12.642642  371990 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:24:12.658965  371990 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:12.671458  371990 ssh_runner.go:195] Run: openssl version
	I1219 03:24:12.677537  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.684496  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:12.691660  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695495  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.695541  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:12.730806  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:24:12.738920  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:24:12.746295  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.753462  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:24:12.760758  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764356  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.764415  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:24:12.800484  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:12.809192  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8536.pem /etc/ssl/certs/51391683.0
	I1219 03:24:12.816759  371990 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.825274  371990 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:24:12.833125  371990 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836939  371990 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.836993  371990 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:24:12.871891  371990 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:12.879672  371990 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/85362.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:12.887040  371990 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:12.890648  371990 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 03:24:12.890729  371990 kubeadm.go:401] StartCluster: {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:12.890825  371990 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:12.890893  371990 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:12.920058  371990 cri.go:92] found id: ""
	I1219 03:24:12.920133  371990 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:12.928606  371990 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:24:12.936934  371990 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1219 03:24:12.936985  371990 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:24:12.945218  371990 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:24:12.945240  371990 kubeadm.go:158] found existing configuration files:
	
	I1219 03:24:12.945287  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 03:24:12.952614  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:24:12.952666  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:24:12.960262  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 03:24:12.967725  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:24:12.967831  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:24:12.975015  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.982506  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:24:12.982549  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:24:12.989686  371990 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 03:24:12.997834  371990 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:24:12.997888  371990 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:24:13.005263  371990 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1219 03:24:13.041610  371990 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1219 03:24:13.041730  371990 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 03:24:13.106822  371990 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1219 03:24:13.106921  371990 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1219 03:24:13.106982  371990 kubeadm.go:319] OS: Linux
	I1219 03:24:13.107046  371990 kubeadm.go:319] CGROUPS_CPU: enabled
	I1219 03:24:13.107146  371990 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1219 03:24:13.107237  371990 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1219 03:24:13.107288  371990 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1219 03:24:13.107344  371990 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1219 03:24:13.107385  371990 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1219 03:24:13.107463  371990 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1219 03:24:13.107538  371990 kubeadm.go:319] CGROUPS_IO: enabled
	I1219 03:24:13.164958  371990 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 03:24:13.165152  371990 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 03:24:13.165292  371990 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 03:24:13.174971  371990 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 03:24:13.178028  371990 out.go:252]   - Generating certificates and keys ...
	I1219 03:24:13.178136  371990 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 03:24:13.178232  371990 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 03:24:13.301903  371990 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 03:24:13.387971  371990 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 03:24:13.500057  371990 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 03:24:13.603458  371990 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 03:24:13.636925  371990 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 03:24:13.637122  371990 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:13.836231  371990 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 03:24:13.836371  371990 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-837172] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1219 03:24:14.002346  371990 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 03:24:14.032095  371990 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 03:24:14.137234  371990 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 03:24:14.137362  371990 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 03:24:14.167788  371990 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 03:24:14.256296  371990 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 03:24:14.335846  371990 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 03:24:14.409462  371990 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 03:24:14.592839  371990 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 03:24:14.593412  371990 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 03:24:14.597164  371990 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 03:24:14.598823  371990 out.go:252]   - Booting up control plane ...
	I1219 03:24:14.598951  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 03:24:14.599066  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 03:24:14.599695  371990 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 03:24:14.613628  371990 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 03:24:14.613794  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 03:24:14.621414  371990 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 03:24:14.621682  371990 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 03:24:14.621767  371990 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 03:24:14.720948  371990 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 03:24:14.721103  371990 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 03:24:15.222675  371990 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.8355ms
	I1219 03:24:15.227351  371990 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 03:24:15.227489  371990 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1219 03:24:15.227609  371990 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 03:24:15.227757  371990 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 03:24:16.232434  371990 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004794877s
	I1219 03:24:16.822339  371990 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.594795775s
	I1219 03:24:18.729241  371990 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501609989s
	I1219 03:24:18.747830  371990 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 03:24:18.757789  371990 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 03:24:18.768843  371990 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 03:24:18.769101  371990 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-837172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 03:24:18.777248  371990 kubeadm.go:319] [bootstrap-token] Using token: tjh3gu.t27j0f9f7y1maup8
	I1219 03:24:18.778596  371990 out.go:252]   - Configuring RBAC rules ...
	I1219 03:24:18.778756  371990 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 03:24:18.782127  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 03:24:18.788723  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 03:24:18.791752  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 03:24:18.794369  371990 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 03:24:18.796980  371990 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 03:24:19.135416  371990 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 03:24:19.551422  371990 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 03:24:20.135668  371990 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 03:24:20.136573  371990 kubeadm.go:319] 
	I1219 03:24:20.136667  371990 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 03:24:20.136677  371990 kubeadm.go:319] 
	I1219 03:24:20.136815  371990 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 03:24:20.136852  371990 kubeadm.go:319] 
	I1219 03:24:20.136883  371990 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 03:24:20.136970  371990 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 03:24:20.137020  371990 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 03:24:20.137026  371990 kubeadm.go:319] 
	I1219 03:24:20.137089  371990 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 03:24:20.137101  371990 kubeadm.go:319] 
	I1219 03:24:20.137171  371990 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 03:24:20.137179  371990 kubeadm.go:319] 
	I1219 03:24:20.137247  371990 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 03:24:20.137362  371990 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 03:24:20.137462  371990 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 03:24:20.137475  371990 kubeadm.go:319] 
	I1219 03:24:20.137594  371990 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 03:24:20.137725  371990 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 03:24:20.137741  371990 kubeadm.go:319] 
	I1219 03:24:20.137841  371990 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.137977  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 \
	I1219 03:24:20.138014  371990 kubeadm.go:319] 	--control-plane 
	I1219 03:24:20.138022  371990 kubeadm.go:319] 
	I1219 03:24:20.138116  371990 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 03:24:20.138124  371990 kubeadm.go:319] 
	I1219 03:24:20.138229  371990 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token tjh3gu.t27j0f9f7y1maup8 \
	I1219 03:24:20.138367  371990 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:e8b10e60f3db527579d1c34bb8f4e490eb8eee3e7862dee81a2c160635afa3a8 
	I1219 03:24:20.141307  371990 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1219 03:24:20.141417  371990 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1219 03:24:20.141469  371990 cni.go:84] Creating CNI manager for ""
	I1219 03:24:20.141490  371990 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:20.143537  371990 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1219 03:24:20.144502  371990 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1219 03:24:20.148822  371990 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1219 03:24:20.148843  371990 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1219 03:24:20.161612  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1219 03:24:20.379173  371990 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:24:20.379262  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.379275  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-837172 minikube.k8s.io/updated_at=2025_12_19T03_24_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=newest-cni-837172 minikube.k8s.io/primary=true
	I1219 03:24:20.388746  371990 ops.go:34] apiserver oom_adj: -16
	I1219 03:24:20.454762  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:20.955824  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.454834  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:21.954831  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.455563  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:22.955820  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.454808  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:23.955426  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.454807  371990 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:24:24.521140  371990 kubeadm.go:1114] duration metric: took 4.141930442s to wait for elevateKubeSystemPrivileges
	I1219 03:24:24.521185  371990 kubeadm.go:403] duration metric: took 11.630460792s to StartCluster
	I1219 03:24:24.521209  371990 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.521280  371990 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:24.522690  371990 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:24.522969  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:24:24.522985  371990 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:24.523053  371990 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:24.523152  371990 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-837172"
	I1219 03:24:24.523166  371990 addons.go:70] Setting default-storageclass=true in profile "newest-cni-837172"
	I1219 03:24:24.523191  371990 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-837172"
	I1219 03:24:24.523195  371990 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-837172"
	I1219 03:24:24.523231  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.523251  371990 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:24.523588  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.523773  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.524387  371990 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:24.525579  371990 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:24.547572  371990 addons.go:239] Setting addon default-storageclass=true in "newest-cni-837172"
	I1219 03:24:24.547634  371990 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:24.547832  371990 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:24.548129  371990 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:24.552104  371990 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.552127  371990 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:24.552183  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.578893  371990 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.579252  371990 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:24.579323  371990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:24.583084  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.603726  371990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:24.615978  371990 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:24:24.668369  371990 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:24.704139  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:24.719590  371990 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:24.803320  371990 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
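	For reference, the sed pipeline a few lines above rewrites the coredns ConfigMap so that the Corefile gains a hosts block ahead of the forward plugin, roughly of the form:
	
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	
	which is what the "host record injected into CoreDNS's ConfigMap" message refers to.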
	I1219 03:24:24.805437  371990 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:24.805497  371990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:25.029229  371990 api_server.go:72] duration metric: took 506.215716ms to wait for apiserver process to appear ...
	I1219 03:24:25.029261  371990 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:25.029282  371990 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:25.034829  371990 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:24:25.035777  371990 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:25.035813  371990 api_server.go:131] duration metric: took 6.544499ms to wait for apiserver health ...
	I1219 03:24:25.035828  371990 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:25.038607  371990 system_pods.go:59] 8 kube-system pods found
	I1219 03:24:25.038639  371990 system_pods.go:61] "coredns-7d764666f9-ckc9j" [5bc3e758-2623-4eae-87fe-a58b932c9e87] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038651  371990 system_pods.go:61] "etcd-newest-cni-837172" [59f28fae-3605-487b-a1b8-c3851c47abac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:25.038659  371990 system_pods.go:61] "kindnet-846n4" [b45c7fbd-085c-4972-b312-0973aab68ddc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:25.038670  371990 system_pods.go:61] "kube-apiserver-newest-cni-837172" [8d92900e-716d-42ad-9d88-1ca6d0ddf5c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:25.038678  371990 system_pods.go:61] "kube-controller-manager-newest-cni-837172" [46b3ad5a-64d1-4e1f-8bdf-ce613dcd6348] Running
	I1219 03:24:25.038684  371990 system_pods.go:61] "kube-proxy-6wg2n" [356cd689-df37-49ac-a3f2-1931978ccf64] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:24:25.038690  371990 system_pods.go:61] "kube-scheduler-newest-cni-837172" [da065d09-cc65-42e7-8e0d-9f9709cafaf9] Running
	I1219 03:24:25.038695  371990 system_pods.go:61] "storage-provisioner" [ba402c27-5828-489f-a656-bc0ef2e8f05e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:25.038713  371990 system_pods.go:74] duration metric: took 2.880877ms to wait for pod list to return data ...
	I1219 03:24:25.038720  371990 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:25.038969  371990 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:24:25.040226  371990 addons.go:546] duration metric: took 517.179033ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:24:25.040990  371990 default_sa.go:45] found service account: "default"
	I1219 03:24:25.041006  371990 default_sa.go:55] duration metric: took 2.27792ms for default service account to be created ...
	I1219 03:24:25.041015  371990 kubeadm.go:587] duration metric: took 518.007856ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:25.041030  371990 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:25.043438  371990 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:25.043465  371990 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:25.043494  371990 node_conditions.go:105] duration metric: took 2.45952ms to run NodePressure ...
	I1219 03:24:25.043503  371990 start.go:242] waiting for startup goroutines ...
	I1219 03:24:25.308179  371990 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-837172" context rescaled to 1 replicas
	I1219 03:24:25.308227  371990 start.go:247] waiting for cluster config update ...
	I1219 03:24:25.308241  371990 start.go:256] writing updated cluster config ...
	I1219 03:24:25.308502  371990 ssh_runner.go:195] Run: rm -f paused
	I1219 03:24:25.358553  371990 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:24:25.360429  371990 out.go:179] * Done! kubectl is now configured to use "newest-cni-837172" cluster and "default" namespace by default
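	A minimal sanity check from the host at this point (editorial sketch, not captured output; the profile name newest-cni-837172 is taken from the log above) would be:
	
	  kubectl config current-context   # expected to print newest-cni-837172
	  kubectl get nodes                # lists the single control-plane node of this profile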
	
	
	==> CRI-O <==
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.472463868Z" level=info msg="Created container d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/clear-stale-pid" id=36313b84-f615-418e-a0c2-1800c7ad9bba name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.473232027Z" level=info msg="Starting container: d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885" id=fa0d9c25-58bb-41e7-a751-32c0a5ee2072 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:10 embed-certs-805185 crio[571]: time="2025-12-19T03:06:10.475578796Z" level=info msg="Started container" PID=1981 containerID=d14c5a7b642f85cbc69aef96f7b439f2c6a873edd84fe53cafdbf19ba613e885 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/clear-stale-pid id=fa0d9c25-58bb-41e7-a751-32c0a5ee2072 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4243852a152c440419680bef0dfbf6f37d15c21f97f0c7059823f1520f9fc99c
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.135352218Z" level=info msg="Checking image status: kong:3.9" id=b06c69a2-5538-434a-8a72-4f2b223b8bfe name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.135542093Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.137747838Z" level=info msg="Checking image status: kong:3.9" id=9a4a1d08-b9e8-4169-83f7-aec209f5e0b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.13786748Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.142013294Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy" id=de23cef6-56d6-4c7e-a45c-c26d931492ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.142148287Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.148827695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.149609559Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.189335726Z" level=info msg="Created container 20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy" id=de23cef6-56d6-4c7e-a45c-c26d931492ea name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.190165238Z" level=info msg="Starting container: 20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2" id=b6a1bf8a-ce95-4b10-adea-c7af131524c4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:11 embed-certs-805185 crio[571]: time="2025-12-19T03:06:11.192808924Z" level=info msg="Started container" PID=1991 containerID=20beadfa950bfa82589018450b7e9a01380f66f6a9eae32a9ce629a265cd5ad2 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf/proxy id=b6a1bf8a-ce95-4b10-adea-c7af131524c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4243852a152c440419680bef0dfbf6f37d15c21f97f0c7059823f1520f9fc99c
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.183170694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=084cd7a4-6ece-4c0a-8397-94465f3314df name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.184121665Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4d531b84-18eb-47e0-aad8-61f09bca340d name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.185241228Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=156d4e77-f8f4-4dbd-a7c4-e7b60cc39d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.18538707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.189952355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190095237Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/77220e3b053f54d12f604ca801e81ba41d5ddbdc6900f8b55f5f4338438a1241/merged/etc/passwd: no such file or directory"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190117712Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/77220e3b053f54d12f604ca801e81ba41d5ddbdc6900f8b55f5f4338438a1241/merged/etc/group: no such file or directory"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.190333672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.231341429Z" level=info msg="Created container 3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904: kube-system/storage-provisioner/storage-provisioner" id=156d4e77-f8f4-4dbd-a7c4-e7b60cc39d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.232031749Z" level=info msg="Starting container: 3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904" id=26b83eef-8d30-46ec-89bd-a94e4e05ed3b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:26 embed-certs-805185 crio[571]: time="2025-12-19T03:06:26.234124046Z" level=info msg="Started container" PID=3409 containerID=3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904 description=kube-system/storage-provisioner/storage-provisioner id=26b83eef-8d30-46ec-89bd-a94e4e05ed3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c1876caf93065afdf67bc083a0b6fc921040c35760414f728f15ba554180160
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	3d7dd245b233f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   0c1876caf9306       storage-provisioner                                     kube-system
	20beadfa950bf       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   4243852a152c4       kubernetes-dashboard-kong-9849c64bd-9p6zf               kubernetes-dashboard
	d14c5a7b642f8       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   4243852a152c4       kubernetes-dashboard-kong-9849c64bd-9p6zf               kubernetes-dashboard
	a0449cd056863       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   db4923db488cf       kubernetes-dashboard-auth-658884f98f-455ns              kubernetes-dashboard
	95cc887c80866       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   4037dc076fb10       kubernetes-dashboard-web-5c9f966b98-gfhnn               kubernetes-dashboard
	310b39bacccab       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   0be0ce9f85847       kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr   kubernetes-dashboard
	5b4f781150596       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   5af5195e34c00       kubernetes-dashboard-api-78bc857d5c-fljnp               kubernetes-dashboard
	37fd60f84cab5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           18 minutes ago      Running             coredns                                0                   f0f30eba64edf       coredns-66bc5c9577-8gphx                                kube-system
	e8ff222bdb55d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   523d107bc5d8f       busybox                                                 default
	3e6a9f16432bb       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           18 minutes ago      Running             kube-proxy                             0                   4fb4de09d3b1c       kube-proxy-p8pqg                                        kube-system
	3df3cb7877110       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   0c1876caf9306       storage-provisioner                                     kube-system
	9734264bc0316       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   e566763b65b28       kindnet-jj9ms                                           kube-system
	dca8f84f406b7       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           18 minutes ago      Running             kube-controller-manager                0                   1479078fc9c08       kube-controller-manager-embed-certs-805185              kube-system
	c0e9c22a25238       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           18 minutes ago      Running             kube-scheduler                         0                   49e7ef6075ae3       kube-scheduler-embed-certs-805185                       kube-system
	e4f794af7924e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           18 minutes ago      Running             etcd                                   0                   c8ef977665655       etcd-embed-certs-805185                                 kube-system
	fa9a88171fdc7       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           18 minutes ago      Running             kube-apiserver                         0                   d92a0248993ee       kube-apiserver-embed-certs-805185                       kube-system
	
	
	==> coredns [37fd60f84cab5a40d06b06eda266df17eadd8d0a9ee56f7b235782087ec0083a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40097 - 29931 "HINFO IN 2735309851509519627.415811791505313667. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.415024708s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-805185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-805185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-805185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_04_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:04:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-805185
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:24:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:24:15 +0000   Fri, 19 Dec 2025 03:05:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-805185
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e529c61b-35ad-4151-ab38-525026482d8c
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-8gphx                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-embed-certs-805185                                  100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-jj9ms                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-embed-certs-805185                        250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-805185               200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-p8pqg                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-805185                        100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-78bc857d5c-fljnp                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-658884f98f-455ns               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-9p6zf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-gfhnn                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
	  Normal  NodeReady                19m                kubelet          Node embed-certs-805185 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node embed-certs-805185 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-805185 event: Registered Node embed-certs-805185 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [e4f794af7924e48700f3eb1f53c1070c15bc99d17539d5f097c1a7c62dded81f] <==
	{"level":"warn","ts":"2025-12-19T03:05:53.719221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.745613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.755575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.779584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:53.825911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.666523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.686420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.703183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.714636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.724682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.735837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.746037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.755589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.784157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.802436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:05:57.825473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:06:04.808381Z","caller":"traceutil/trace.go:172","msg":"trace[24513416] transaction","detail":"{read_only:false; response_revision:699; number_of_response:1; }","duration":"118.600036ms","start":"2025-12-19T03:06:04.689759Z","end":"2025-12-19T03:06:04.808359Z","steps":["trace[24513416] 'process raft request'  (duration: 118.551956ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:06:04.808596Z","caller":"traceutil/trace.go:172","msg":"trace[1604688651] transaction","detail":"{read_only:false; response_revision:698; number_of_response:1; }","duration":"178.640288ms","start":"2025-12-19T03:06:04.629933Z","end":"2025-12-19T03:06:04.808573Z","steps":["trace[1604688651] 'process raft request'  (duration: 128.977486ms)","trace[1604688651] 'compare'  (duration: 49.259539ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:06:10.029004Z","caller":"traceutil/trace.go:172","msg":"trace[1715983664] transaction","detail":"{read_only:false; response_revision:712; number_of_response:1; }","duration":"117.29944ms","start":"2025-12-19T03:06:09.911684Z","end":"2025-12-19T03:06:10.028983Z","steps":["trace[1715983664] 'process raft request'  (duration: 95.039156ms)","trace[1715983664] 'compare'  (duration: 21.881704ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:15:53.166470Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":959}
	{"level":"info","ts":"2025-12-19T03:15:53.173813Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":959,"took":"6.970165ms","hash":136659999,"current-db-size-bytes":3895296,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3895296,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:15:53.173870Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":136659999,"revision":959,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:20:53.171463Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1202}
	{"level":"info","ts":"2025-12-19T03:20:53.173821Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1202,"took":"1.992974ms","hash":2951296099,"current-db-size-bytes":3895296,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2015232,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:20:53.173858Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2951296099,"revision":1202,"compact-revision":959}
	
	
	==> kernel <==
	 03:24:42 up  1:07,  0 user,  load average: 1.94, 0.92, 1.24
	Linux embed-certs-805185 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9734264bc03165e973381a11181db3d0d85532eb608a1d648d545affcc0f5657] <==
	I1219 03:22:35.868429       1 main.go:301] handling current node
	I1219 03:22:45.867952       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:22:45.867995       1 main.go:301] handling current node
	I1219 03:22:55.871868       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:22:55.871903       1 main.go:301] handling current node
	I1219 03:23:05.872806       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:05.872843       1 main.go:301] handling current node
	I1219 03:23:15.868177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:15.868210       1 main.go:301] handling current node
	I1219 03:23:25.867534       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:25.867573       1 main.go:301] handling current node
	I1219 03:23:35.867892       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:35.867944       1 main.go:301] handling current node
	I1219 03:23:45.874749       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:45.874784       1 main.go:301] handling current node
	I1219 03:23:55.871842       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:23:55.871874       1 main.go:301] handling current node
	I1219 03:24:05.867919       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:05.867959       1 main.go:301] handling current node
	I1219 03:24:15.868601       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:15.868645       1 main.go:301] handling current node
	I1219 03:24:25.868249       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:25.868398       1 main.go:301] handling current node
	I1219 03:24:35.867612       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1219 03:24:35.867672       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fa9a88171fdc75e01df96259a9096dab5e5ab76217553f36b6a9922f9e0f06fe] <==
	W1219 03:05:57.666179       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:05:57.686342       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.703087       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.714554       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.724651       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:05:57.735825       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.745925       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.755549       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.773268       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.784117       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:05:57.795282       1 controller.go:667] quota admission added evaluator for: endpoints
	W1219 03:05:57.802417       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:05:57.819295       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:05:57.894304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:05:57.991073       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:05:58.143944       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:05:58.544436       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:05:58.579983       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:05:58.584890       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:05:58.595427       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.101.245.250"}
	I1219 03:05:58.600356       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.97.48.46"}
	I1219 03:05:58.604096       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.96.197.102"}
	I1219 03:05:58.610018       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.99.175"}
	I1219 03:05:58.616775       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.106.250.73"}
	I1219 03:15:54.401313       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [dca8f84f406b7acd8227404694ece4fd29d232591939f26e4325c52e7c00de60] <==
	I1219 03:05:57.736964       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:05:57.737011       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 03:05:57.737131       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1219 03:05:57.737248       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 03:05:57.737588       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:05:57.737617       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1219 03:05:57.738742       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:05:57.738773       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:05:57.738742       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 03:05:57.744005       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:05:57.744039       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:05:57.744147       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:05:57.744203       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:05:57.744212       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:05:57.744220       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:05:57.746255       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:05:57.747424       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:05:57.753898       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:05:57.755198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:05:58.841753       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:58.868581       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:58.874821       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:05:58.881981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:05:58.882003       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:05:58.882012       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [3e6a9f16432bb2d0f57c9e657b776eaae753f9a9bc474bcd825b022f2cf4726b] <==
	I1219 03:05:55.448309       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:05:55.528222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:05:55.628850       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:05:55.628898       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1219 03:05:55.629015       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:05:55.649512       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:05:55.649574       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:05:55.655220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:05:55.655665       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:05:55.655695       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:55.657141       1 config.go:200] "Starting service config controller"
	I1219 03:05:55.657618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:05:55.657697       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:05:55.657751       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:05:55.658014       1 config.go:309] "Starting node config controller"
	I1219 03:05:55.658027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:05:55.658041       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:05:55.658491       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:05:55.658532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:05:55.757856       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:05:55.759651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:05:55.759720       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c0e9c22a2523807e95fb727795c040c95c5bd029feb66a6a92f7087e4503774e] <==
	I1219 03:05:53.750115       1 serving.go:386] Generated self-signed cert in-memory
	I1219 03:05:54.696153       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:05:54.696180       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:05:54.700571       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.700567       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 03:05:54.700623       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.700627       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 03:05:54.700603       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.700660       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.701061       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:05:54.701240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:05:54.801652       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:05:54.801670       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 03:05:54.801652       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785080     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474hq\" (UniqueName: \"kubernetes.io/projected/c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060-kube-api-access-474hq\") pod \"kubernetes-dashboard-auth-658884f98f-455ns\" (UID: \"c86f0bcd-5dae-4bb5-b1c2-bd9a3bea9060\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785095     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab309a53-9e4b-4a01-899a-797c7ba5208d-tmp-volume\") pod \"kubernetes-dashboard-api-78bc857d5c-fljnp\" (UID: \"ab309a53-9e4b-4a01-899a-797c7ba5208d\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785116     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zzfm\" (UniqueName: \"kubernetes.io/projected/ab309a53-9e4b-4a01-899a-797c7ba5208d-kube-api-access-6zzfm\") pod \"kubernetes-dashboard-api-78bc857d5c-fljnp\" (UID: \"ab309a53-9e4b-4a01-899a-797c7ba5208d\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785138     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f73d26a9-48d2-47fc-a241-1a7504297988-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr\" (UID: \"f73d26a9-48d2-47fc-a241-1a7504297988\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785164     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7smc\" (UniqueName: \"kubernetes.io/projected/2c9c9b86-fd2a-4420-b98d-27dd078fe2c6-kube-api-access-k7smc\") pod \"kubernetes-dashboard-web-5c9f966b98-gfhnn\" (UID: \"2c9c9b86-fd2a-4420-b98d-27dd078fe2c6\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn"
	Dec 19 03:05:58 embed-certs-805185 kubelet[737]: I1219 03:05:58.785222     737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/30a45022-1901-4ea6-8857-08ff9a85c27a-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-9p6zf\" (UID: \"30a45022-1901-4ea6-8857-08ff9a85c27a\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf"
	Dec 19 03:05:59 embed-certs-805185 kubelet[737]: I1219 03:05:59.997824     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:05:59 embed-certs-805185 kubelet[737]: I1219 03:05:59.997922     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.037195     737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.097959     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-78bc857d5c-fljnp" podStartSLOduration=1.09098601 podStartE2EDuration="2.097935412s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:58.990618466 +0000 UTC m=+7.051227125" lastFinishedPulling="2025-12-19 03:05:59.997567856 +0000 UTC m=+8.058176527" observedRunningTime="2025-12-19 03:06:00.097689886 +0000 UTC m=+8.158298580" watchObservedRunningTime="2025-12-19 03:06:00.097935412 +0000 UTC m=+8.158544082"
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.934970     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:00 embed-certs-805185 kubelet[737]: I1219 03:06:00.936003     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:02 embed-certs-805185 kubelet[737]: I1219 03:06:02.793612     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-7fwjr" podStartSLOduration=2.864491069 podStartE2EDuration="4.793587364s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.005628182 +0000 UTC m=+7.066236856" lastFinishedPulling="2025-12-19 03:06:00.934724484 +0000 UTC m=+8.995333151" observedRunningTime="2025-12-19 03:06:01.111916375 +0000 UTC m=+9.172525051" watchObservedRunningTime="2025-12-19 03:06:02.793587364 +0000 UTC m=+10.854196040"
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.028076     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.028167     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:04 embed-certs-805185 kubelet[737]: I1219 03:06:04.121599     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-gfhnn" podStartSLOduration=1.100576683 podStartE2EDuration="6.121572519s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.006841332 +0000 UTC m=+7.067449988" lastFinishedPulling="2025-12-19 03:06:04.027837166 +0000 UTC m=+12.088445824" observedRunningTime="2025-12-19 03:06:04.121201067 +0000 UTC m=+12.181809743" watchObservedRunningTime="2025-12-19 03:06:04.121572519 +0000 UTC m=+12.182181195"
	Dec 19 03:06:05 embed-certs-805185 kubelet[737]: I1219 03:06:05.244202     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:05 embed-certs-805185 kubelet[737]: I1219 03:06:05.244300     737 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:06 embed-certs-805185 kubelet[737]: I1219 03:06:06.135487     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-658884f98f-455ns" podStartSLOduration=1.904186191 podStartE2EDuration="8.135456486s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.012692427 +0000 UTC m=+7.073301081" lastFinishedPulling="2025-12-19 03:06:05.243962705 +0000 UTC m=+13.304571376" observedRunningTime="2025-12-19 03:06:06.134881051 +0000 UTC m=+14.195489728" watchObservedRunningTime="2025-12-19 03:06:06.135456486 +0000 UTC m=+14.196065161"
	Dec 19 03:06:12 embed-certs-805185 kubelet[737]: I1219 03:06:12.162006     737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-9p6zf" podStartSLOduration=2.749011678 podStartE2EDuration="14.161975971s" podCreationTimestamp="2025-12-19 03:05:58 +0000 UTC" firstStartedPulling="2025-12-19 03:05:59.023057738 +0000 UTC m=+7.083666406" lastFinishedPulling="2025-12-19 03:06:10.436022033 +0000 UTC m=+18.496630699" observedRunningTime="2025-12-19 03:06:12.161201474 +0000 UTC m=+20.221810169" watchObservedRunningTime="2025-12-19 03:06:12.161975971 +0000 UTC m=+20.222584647"
	Dec 19 03:06:26 embed-certs-805185 kubelet[737]: I1219 03:06:26.182763     737 scope.go:117] "RemoveContainer" containerID="3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2"
	Dec 19 03:24:37 embed-certs-805185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:24:37 embed-certs-805185 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:24:37 embed-certs-805185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:24:37 embed-certs-805185 systemd[1]: kubelet.service: Consumed 25.357s CPU time.
	
	
	==> kubernetes-dashboard [310b39bacccabe01a7800d05d30675f93096703212a17f66095da8c1865d22d2] <==
	10.244.0.1 - - [19/Dec/2025:03:22:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:46 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:56 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:00 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:06 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:16 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:26 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:30 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:36 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	E1219 03:22:01.082390       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:01.082525       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:24:01.082114       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [5b4f7811505964d9e14b039acff4c61a760a6112e63bfff6242995499ee3b049] <==
	I1219 03:06:00.157650       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:06:00.157768       1 init.go:49] Using in-cluster config
	I1219 03:06:00.158043       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:06:00.158057       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:06:00.158064       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:06:00.158072       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:06:00.164066       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:06:00.164098       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:06:00.190400       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:06:00.190937       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:06:30.196244       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [95cc887c80866d0ea33ef79f7654625e51e2590ee08a32fae89a8d46347f529a] <==
	I1219 03:06:04.155476       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:06:04.155552       1 init.go:48] Using in-cluster config
	I1219 03:06:04.155767       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [a0449cd05686367a0a816405c686858df4a264fbcacf43407705baff34ccbc5a] <==
	I1219 03:06:05.338222       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:06:05.338287       1 init.go:49] Using in-cluster config
	I1219 03:06:05.338471       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [3d7dd245b233f1e33bd4f191102b08c020335fd06eb801c42d94c75e88488904] <==
	W1219 03:24:17.891794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:19.895638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:19.899375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:21.903213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:21.907243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.910143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.914640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.918600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.924444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.928290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.932914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.935848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.941274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.944766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.948619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.952001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.956116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.959480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.963533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.967596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.971935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.976025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.980918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:41.985315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:41.990650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3df3cb787711062528813854036926a57363b917a00c81ef68c3b8c0a675cfa2] <==
	I1219 03:05:55.403581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:06:25.407035       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-805185 -n embed-certs-805185: exit status 2 (343.896478ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-805185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.81s)
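The storage-provisioner log above is dominated by the client-go warning that v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. Purely as an illustration of the suggested replacement API (this is not the provisioner's code; the in-cluster config and the kube-system namespace are assumptions made only for the example):

// endpointslices.go - illustrative only; lists discovery.k8s.io/v1
// EndpointSlices, the API the deprecation warning above points to.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config, as the pods in the log above use.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("clientset: %v", err)
	}

	// EndpointSlices replace the v1 Endpoints objects the warning is about.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list endpointslices: %v", err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}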

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-717222 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-717222 --alsologtostderr -v=1: exit status 80 (1.855200021s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-717222 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:24:44.355426  380006 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:44.355681  380006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:44.355690  380006 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:44.355694  380006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:44.355935  380006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:44.356176  380006 out.go:368] Setting JSON to false
	I1219 03:24:44.356194  380006 mustload.go:66] Loading cluster: default-k8s-diff-port-717222
	I1219 03:24:44.356532  380006 config.go:182] Loaded profile config "default-k8s-diff-port-717222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 03:24:44.356934  380006 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-717222 --format={{.State.Status}}
	I1219 03:24:44.375294  380006 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:24:44.375570  380006 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:44.433554  380006 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-19 03:24:44.423004987 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:44.434122  380006 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-717222 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1219 03:24:44.435865  380006 out.go:179] * Pausing node default-k8s-diff-port-717222 ... 
	I1219 03:24:44.436901  380006 host.go:66] Checking if "default-k8s-diff-port-717222" exists ...
	I1219 03:24:44.437173  380006 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:44.437226  380006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-717222
	I1219 03:24:44.454831  380006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/default-k8s-diff-port-717222/id_rsa Username:docker}
	I1219 03:24:44.556930  380006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:24:44.571538  380006 pause.go:52] kubelet running: true
	I1219 03:24:44.571618  380006 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:24:44.770385  380006 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:24:44.770486  380006 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:24:44.838064  380006 cri.go:92] found id: "d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6"
	I1219 03:24:44.838091  380006 cri.go:92] found id: "2592b062e787245c17fcfad40e551290657aea425be5e044174243d7524bc317"
	I1219 03:24:44.838098  380006 cri.go:92] found id: "dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d"
	I1219 03:24:44.838103  380006 cri.go:92] found id: "d7b31f6039b4c71a1c774e7e89359f49dd4bca0b72f47cce0a7db10b8a4eb339"
	I1219 03:24:44.838107  380006 cri.go:92] found id: "cd178b86eed6df4e301822d1cb033cde8457245acc5c1565f60ccb12d47ee2aa"
	I1219 03:24:44.838117  380006 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:24:44.838122  380006 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:24:44.838126  380006 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:24:44.838129  380006 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:24:44.838140  380006 cri.go:92] found id: "dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650"
	I1219 03:24:44.838148  380006 cri.go:92] found id: "35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270"
	I1219 03:24:44.838153  380006 cri.go:92] found id: "5fe7d916a364f331d8aa2665bfdbeab1fff27316fa0fee64cb7834c35bef418d"
	I1219 03:24:44.838162  380006 cri.go:92] found id: "efed0d882497800414676940b84aa41e026026efe618a2d160430de527d8e1f6"
	I1219 03:24:44.838168  380006 cri.go:92] found id: "6e3eff743b9cdb70ef6cbf70a1039d5cff4c8fe2e48d5a15acb23261f2b4507e"
	I1219 03:24:44.838175  380006 cri.go:92] found id: "5c21853c28563a691ef440986410f18c67ba23dbc122b1d94b9cce6075bdfb75"
	I1219 03:24:44.838186  380006 cri.go:92] found id: ""
	I1219 03:24:44.838255  380006 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:24:44.849990  380006 retry.go:31] will retry after 359.69258ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:44Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:24:45.210628  380006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:24:45.224353  380006 pause.go:52] kubelet running: false
	I1219 03:24:45.224416  380006 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:24:45.386764  380006 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:24:45.386857  380006 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:24:45.460858  380006 cri.go:92] found id: "d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6"
	I1219 03:24:45.460881  380006 cri.go:92] found id: "2592b062e787245c17fcfad40e551290657aea425be5e044174243d7524bc317"
	I1219 03:24:45.460885  380006 cri.go:92] found id: "dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d"
	I1219 03:24:45.460888  380006 cri.go:92] found id: "d7b31f6039b4c71a1c774e7e89359f49dd4bca0b72f47cce0a7db10b8a4eb339"
	I1219 03:24:45.460890  380006 cri.go:92] found id: "cd178b86eed6df4e301822d1cb033cde8457245acc5c1565f60ccb12d47ee2aa"
	I1219 03:24:45.460893  380006 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:24:45.460896  380006 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:24:45.460899  380006 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:24:45.460901  380006 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:24:45.460915  380006 cri.go:92] found id: "dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650"
	I1219 03:24:45.460918  380006 cri.go:92] found id: "35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270"
	I1219 03:24:45.460920  380006 cri.go:92] found id: "5fe7d916a364f331d8aa2665bfdbeab1fff27316fa0fee64cb7834c35bef418d"
	I1219 03:24:45.460923  380006 cri.go:92] found id: "efed0d882497800414676940b84aa41e026026efe618a2d160430de527d8e1f6"
	I1219 03:24:45.460926  380006 cri.go:92] found id: "6e3eff743b9cdb70ef6cbf70a1039d5cff4c8fe2e48d5a15acb23261f2b4507e"
	I1219 03:24:45.460941  380006 cri.go:92] found id: "5c21853c28563a691ef440986410f18c67ba23dbc122b1d94b9cce6075bdfb75"
	I1219 03:24:45.460947  380006 cri.go:92] found id: ""
	I1219 03:24:45.461004  380006 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:24:45.473413  380006 retry.go:31] will retry after 368.974542ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:45Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:24:45.843027  380006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:24:45.858790  380006 pause.go:52] kubelet running: false
	I1219 03:24:45.858845  380006 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:24:46.052082  380006 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:24:46.052149  380006 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:24:46.128939  380006 cri.go:92] found id: "d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6"
	I1219 03:24:46.128963  380006 cri.go:92] found id: "2592b062e787245c17fcfad40e551290657aea425be5e044174243d7524bc317"
	I1219 03:24:46.128970  380006 cri.go:92] found id: "dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d"
	I1219 03:24:46.128975  380006 cri.go:92] found id: "d7b31f6039b4c71a1c774e7e89359f49dd4bca0b72f47cce0a7db10b8a4eb339"
	I1219 03:24:46.128979  380006 cri.go:92] found id: "cd178b86eed6df4e301822d1cb033cde8457245acc5c1565f60ccb12d47ee2aa"
	I1219 03:24:46.128985  380006 cri.go:92] found id: "1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78"
	I1219 03:24:46.128990  380006 cri.go:92] found id: "725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a"
	I1219 03:24:46.128995  380006 cri.go:92] found id: "d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb"
	I1219 03:24:46.128999  380006 cri.go:92] found id: "0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992"
	I1219 03:24:46.129007  380006 cri.go:92] found id: "dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650"
	I1219 03:24:46.129016  380006 cri.go:92] found id: "35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270"
	I1219 03:24:46.129020  380006 cri.go:92] found id: "5fe7d916a364f331d8aa2665bfdbeab1fff27316fa0fee64cb7834c35bef418d"
	I1219 03:24:46.129028  380006 cri.go:92] found id: "efed0d882497800414676940b84aa41e026026efe618a2d160430de527d8e1f6"
	I1219 03:24:46.129033  380006 cri.go:92] found id: "6e3eff743b9cdb70ef6cbf70a1039d5cff4c8fe2e48d5a15acb23261f2b4507e"
	I1219 03:24:46.129040  380006 cri.go:92] found id: "5c21853c28563a691ef440986410f18c67ba23dbc122b1d94b9cce6075bdfb75"
	I1219 03:24:46.129048  380006 cri.go:92] found id: ""
	I1219 03:24:46.129096  380006 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:24:46.144290  380006 out.go:203] 
	W1219 03:24:46.145651  380006 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 03:24:46.145676  380006 out.go:285] * 
	* 
	W1219 03:24:46.150852  380006 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 03:24:46.152584  380006 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-717222 --alsologtostderr -v=1 failed: exit status 80
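The exit status 80 above bottoms out in `sudo runc list -f json` failing with `open /run/runc: no such file or directory`; the stderr shows minikube retrying that listing (retry.go) a few times before surfacing GUEST_PAUSE. As a rough, illustrative sketch of just that list-and-retry step using only the Go standard library (this is not minikube's pause code; the fixed 400ms backoff and the attempt count are simplifying assumptions):

// runclist.go - illustrative sketch of the "sudo runc list -f json" retry
// loop visible in the stderr above; not minikube's actual implementation.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning shells out to runc and returns its JSON output, retrying a
// few times the way the retry.go lines in the log do before giving up.
func listRunning(attempts int) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("list running: runc: %w", err)
		// Simplified fixed backoff; the log shows ~360-370ms jittered waits.
		time.Sleep(400 * time.Millisecond)
	}
	return nil, lastErr
}

func main() {
	out, err := listRunning(3)
	if err != nil {
		// When /run/runc does not exist on the node (as in the stderr above),
		// every attempt fails and the caller reports GUEST_PAUSE.
		fmt.Println(err)
		return
	}
	fmt.Println(string(out))
}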
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-717222
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-717222:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	        "Created": "2025-12-19T03:04:47.206515223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:53.385310779Z",
	            "FinishedAt": "2025-12-19T03:05:52.262245388Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hosts",
	        "LogPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59-json.log",
	        "Name": "/default-k8s-diff-port-717222",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-717222:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-717222",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	                "LowerDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-717222",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-717222/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-717222",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9d06f7aea24e94d05365ef4f03fb5f64c6b5272dae79bd49619bd1821269410e",
	            "SandboxKey": "/var/run/docker/netns/9d06f7aea24e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-717222": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61bece957d17b845e006f35e9e337693d4d396daf2e4f93e70692be3f3288cbb",
	                    "EndpointID": "2c278581ff3b356f6bebafb94e691fc066cab71fa7bdd973be671471a23efca1",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ae:9c:c1:61:6a:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-717222",
	                        "f8284300a033"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222: exit status 2 (351.324347ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25: (1.212277498s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ stop    │ -p newest-cni-837172 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ embed-certs-805185 image list --format=json                                                                                                                                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p embed-certs-805185 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ delete  │ -p embed-certs-805185                                                                                                                                                                                                                              │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ default-k8s-diff-port-717222 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p default-k8s-diff-port-717222 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ delete  │ -p embed-certs-805185                                                                                                                                                                                                                              │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-837172 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:46.186425  380735 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:46.186684  380735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:46.186694  380735 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:46.186711  380735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:46.186932  380735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:46.187393  380735 out.go:368] Setting JSON to false
	I1219 03:24:46.188519  380735 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4037,"bootTime":1766110649,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:46.188572  380735 start.go:143] virtualization: kvm guest
	I1219 03:24:46.190437  380735 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:46.191829  380735 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:46.191879  380735 notify.go:221] Checking for updates...
	I1219 03:24:46.194410  380735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:46.195933  380735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:46.197315  380735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:46.198516  380735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:46.199874  380735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:46.201738  380735 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:46.202513  380735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:46.231628  380735 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:46.231787  380735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:46.296408  380735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:24:46.285802205 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:46.296560  380735 docker.go:319] overlay module found
	I1219 03:24:46.300911  380735 out.go:179] * Using the docker driver based on existing profile
	I1219 03:24:46.302079  380735 start.go:309] selected driver: docker
	I1219 03:24:46.302097  380735 start.go:928] validating driver "docker" against &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:46.302197  380735 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:46.302844  380735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:46.360231  380735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:24:46.349155163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:46.360633  380735 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:46.360678  380735 cni.go:84] Creating CNI manager for ""
	I1219 03:24:46.360796  380735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:46.360862  380735 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:46.363384  380735 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:46.364575  380735 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:46.365748  380735 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:46.366784  380735 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:46.366837  380735 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:46.366858  380735 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:46.366856  380735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:46.366954  380735 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:46.366968  380735 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:46.367086  380735 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:46.387214  380735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:46.387233  380735 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:46.387251  380735 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:46.387281  380735 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:46.387335  380735 start.go:364] duration metric: took 36.004µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:46.387352  380735 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:24:46.387359  380735 fix.go:54] fixHost starting: 
	I1219 03:24:46.387582  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:46.405829  380735 fix.go:112] recreateIfNeeded on newest-cni-837172: state=Stopped err=<nil>
	W1219 03:24:46.405867  380735 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.105809849Z" level=info msg="Created container 35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/clear-stale-pid" id=5a645826-349a-438a-8096-df1ef85fa13f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.106574675Z" level=info msg="Starting container: 35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270" id=cac0719a-4055-47b5-9cd1-db557fa3cd0a name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.108867589Z" level=info msg="Started container" PID=1966 containerID=35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/clear-stale-pid id=cac0719a-4055-47b5-9cd1-db557fa3cd0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8667e85e98055a9a632295b2700a5f7426d8a381a2d0a88f63c9f25413fcbb91
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.244843017Z" level=info msg="Checking image status: kong:3.9" id=0cec8e99-8e10-454e-875b-ea15d4a209cd name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.245030729Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.247083766Z" level=info msg="Checking image status: kong:3.9" id=3f2254f1-a52b-4104-87c2-661e1bd23ec3 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.247306541Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.25336671Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy" id=94e64d89-1339-4146-8f9c-53f734cb5589 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.253525887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.260510197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.261326368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.301363315Z" level=info msg="Created container dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy" id=94e64d89-1339-4146-8f9c-53f734cb5589 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.30215616Z" level=info msg="Starting container: dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650" id=12ee788e-8289-4593-8fc9-302f77b8987b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.304379149Z" level=info msg="Started container" PID=1977 containerID=dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy id=12ee788e-8289-4593-8fc9-302f77b8987b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8667e85e98055a9a632295b2700a5f7426d8a381a2d0a88f63c9f25413fcbb91
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.293364694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7a2b6641-2330-4f1c-8ac3-bd5fc486ac9a name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.294343816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25406107-20f3-4be8-a6d5-7899eb74be0f name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.295572666Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ee62b1e2-d30f-4a02-8252-3cb5cb6d1371 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.295760296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302496713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302683962Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/79e095d41c86d78a8608ae66d21f2385ceb6046ec971057f76b5fefa76ae5f30/merged/etc/passwd: no such file or directory"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302750865Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/79e095d41c86d78a8608ae66d21f2385ceb6046ec971057f76b5fefa76ae5f30/merged/etc/group: no such file or directory"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.303093477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.338341513Z" level=info msg="Created container d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6: kube-system/storage-provisioner/storage-provisioner" id=ee62b1e2-d30f-4a02-8252-3cb5cb6d1371 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.339046763Z" level=info msg="Starting container: d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6" id=43566eeb-3dc8-4778-bebc-e3a50bf31b5d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.341081965Z" level=info msg="Started container" PID=3395 containerID=d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6 description=kube-system/storage-provisioner/storage-provisioner id=43566eeb-3dc8-4778-bebc-e3a50bf31b5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=470b7f13281e4c61793ea7eeab1f00af8c464b75a182af8abe8a9e8fcfc00b9a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	d997c9b36079f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   470b7f13281e4       storage-provisioner                                     kube-system
	dd2d524ddac23       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   8667e85e98055       kubernetes-dashboard-kong-9849c64bd-jnmzq               kubernetes-dashboard
	35d02beeb2185       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   8667e85e98055       kubernetes-dashboard-kong-9849c64bd-jnmzq               kubernetes-dashboard
	5fe7d916a364f       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   8df1f8a8e9b8c       kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj   kubernetes-dashboard
	efed0d8824978       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   85c0932639a7f       kubernetes-dashboard-web-5c9f966b98-pmb5t               kubernetes-dashboard
	6e3eff743b9cd       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   226a7334560d4       kubernetes-dashboard-auth-76bb77b695-58swx              kubernetes-dashboard
	5c21853c28563       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   442cfc6f80155       kubernetes-dashboard-api-6c4454678d-vmnj2               kubernetes-dashboard
	561ec43405227       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   bdce9bd9d632c       busybox                                                 default
	2592b062e7872       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           18 minutes ago      Running             coredns                                0                   ad0fcb07810bf       coredns-66bc5c9577-dskxl                                kube-system
	dbbb6a255de37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   470b7f13281e4       storage-provisioner                                     kube-system
	d7b31f6039b4c       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   42aa8ce5cba75       kindnet-zgcrn                                           kube-system
	cd178b86eed6d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           18 minutes ago      Running             kube-proxy                             0                   84cdb0361e2e6       kube-proxy-mr7c8                                        kube-system
	1340a2f59347d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           18 minutes ago      Running             etcd                                   0                   ccb6ae903ae17       etcd-default-k8s-diff-port-717222                       kube-system
	725faee3812c5       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           18 minutes ago      Running             kube-scheduler                         0                   2ad392cb5e514       kube-scheduler-default-k8s-diff-port-717222             kube-system
	d2c496c53c696       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           18 minutes ago      Running             kube-apiserver                         0                   ec833bb6abd84       kube-apiserver-default-k8s-diff-port-717222             kube-system
	0fb4e8910a64f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           18 minutes ago      Running             kube-controller-manager                0                   6217f80d4b77a       kube-controller-manager-default-k8s-diff-port-717222    kube-system
	
	
	==> coredns [2592b062e787245c17fcfad40e551290657aea425be5e044174243d7524bc317] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55129 - 16165 "HINFO IN 3453254911344364497.3052208195299777284. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04385742s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-717222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-717222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-717222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_05_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:05:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-717222
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:24:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-717222
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                301b16dc-31c1-4466-a363-b4e4f9941cd5
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-dskxl                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-717222                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-zgcrn                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-717222              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-717222     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-mr7c8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-717222              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-6c4454678d-vmnj2                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-76bb77b695-58swx               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-jnmzq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-pmb5t                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78] <==
	{"level":"warn","ts":"2025-12-19T03:06:06.378732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.407834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.459907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.484810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.498580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.516121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.532033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.548224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.567442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.583249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.608694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.623918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:16:01.657050Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":990}
	{"level":"info","ts":"2025-12-19T03:16:01.664840Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":990,"took":"7.456943ms","hash":2471762061,"current-db-size-bytes":3915776,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3915776,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:16:01.664919Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2471762061,"revision":990,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:21:01.662543Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1235}
	{"level":"info","ts":"2025-12-19T03:21:01.664987Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1235,"took":"2.122855ms","hash":87961367,"current-db-size-bytes":3915776,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2101248,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-19T03:21:01.665040Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":87961367,"revision":1235,"compact-revision":990}
	{"level":"info","ts":"2025-12-19T03:24:04.616130Z","caller":"traceutil/trace.go:172","msg":"trace[305349054] transaction","detail":"{read_only:false; response_revision:1632; number_of_response:1; }","duration":"142.323928ms","start":"2025-12-19T03:24:04.473787Z","end":"2025-12-19T03:24:04.616111Z","steps":["trace[305349054] 'process raft request'  (duration: 125.885135ms)","trace[305349054] 'compare'  (duration: 16.341168ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:24:04.788967Z","caller":"traceutil/trace.go:172","msg":"trace[736017107] linearizableReadLoop","detail":"{readStateIndex:1877; appliedIndex:1877; }","duration":"171.212825ms","start":"2025-12-19T03:24:04.617731Z","end":"2025-12-19T03:24:04.788944Z","steps":["trace[736017107] 'read index received'  (duration: 171.200561ms)","trace[736017107] 'applied index is now lower than readState.Index'  (duration: 11.453µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:24:04.789391Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.646552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-12-19T03:24:04.789481Z","caller":"traceutil/trace.go:172","msg":"trace[1502582395] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1632; }","duration":"171.772291ms","start":"2025-12-19T03:24:04.617692Z","end":"2025-12-19T03:24:04.789464Z","steps":["trace[1502582395] 'agreement among raft nodes before linearized reading'  (duration: 171.370456ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:24:04.789526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.513258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configuration.konghq.com/konglicenses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:24:04.789556Z","caller":"traceutil/trace.go:172","msg":"trace[331089237] range","detail":"{range_begin:/registry/configuration.konghq.com/konglicenses; range_end:; response_count:0; response_revision:1633; }","duration":"107.546037ms","start":"2025-12-19T03:24:04.682002Z","end":"2025-12-19T03:24:04.789548Z","steps":["trace[331089237] 'agreement among raft nodes before linearized reading'  (duration: 107.495727ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:24:04.789623Z","caller":"traceutil/trace.go:172","msg":"trace[872270148] transaction","detail":"{read_only:false; response_revision:1633; number_of_response:1; }","duration":"218.608589ms","start":"2025-12-19T03:24:04.570993Z","end":"2025-12-19T03:24:04.789601Z","steps":["trace[872270148] 'process raft request'  (duration: 218.073061ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:24:47 up  1:07,  0 user,  load average: 2.02, 0.95, 1.25
	Linux default-k8s-diff-port-717222 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d7b31f6039b4c71a1c774e7e89359f49dd4bca0b72f47cce0a7db10b8a4eb339] <==
	I1219 03:22:44.143465       1 main.go:301] handling current node
	I1219 03:22:54.151094       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:22:54.151132       1 main.go:301] handling current node
	I1219 03:23:04.143666       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:04.143740       1 main.go:301] handling current node
	I1219 03:23:14.144759       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:14.144798       1 main.go:301] handling current node
	I1219 03:23:24.151061       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:24.151091       1 main.go:301] handling current node
	I1219 03:23:34.143600       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:34.143635       1 main.go:301] handling current node
	I1219 03:23:44.142928       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:44.142982       1 main.go:301] handling current node
	I1219 03:23:54.143360       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:54.143398       1 main.go:301] handling current node
	I1219 03:24:04.151416       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:04.151454       1 main.go:301] handling current node
	I1219 03:24:14.147032       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:14.147066       1 main.go:301] handling current node
	I1219 03:24:24.143281       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:24.143313       1 main.go:301] handling current node
	I1219 03:24:34.148458       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:34.148865       1 main.go:301] handling current node
	I1219 03:24:44.147201       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:44.147235       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb] <==
	I1219 03:06:06.068365       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:06:06.073897       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:06:06.084961       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.107.87.247"}
	I1219 03:06:06.089336       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.220.200"}
	I1219 03:06:06.096055       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.107.37.89"}
	I1219 03:06:06.097724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.126.95"}
	I1219 03:06:06.105426       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.103.60.201"}
	I1219 03:06:06.111150       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1219 03:06:06.366398       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.407675       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.460136       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.484666       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.498913       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.516026       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.532002       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.548159       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.564547       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:06:06.583215       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 03:06:06.599243       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	W1219 03:06:06.606221       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.623365       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:06:06.946827       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:06:07.061226       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:16:03.036227       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992] <==
	I1219 03:06:06.443886       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:06:06.448122       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:06:06.448186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 03:06:06.448203       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 03:06:06.448213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 03:06:06.465415       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:06:06.465574       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:06:06.465610       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:06:06.465621       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:06:06.465629       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:06:06.469733       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 03:06:06.472102       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:06:06.475316       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:06:06.478047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:06:06.492013       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:06:06.492117       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:06:06.492629       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:06:06.493189       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:06:06.493873       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:06:07.594172       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:06:07.650019       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:06:07.681489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:06:07.691740       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:06:07.691828       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:06:07.691843       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [cd178b86eed6df4e301822d1cb033cde8457245acc5c1565f60ccb12d47ee2aa] <==
	I1219 03:06:03.629338       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:06:03.701880       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:06:03.802296       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:06:03.802339       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:06:03.802448       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:06:03.830859       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:06:03.830933       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:06:03.839110       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:06:03.840168       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:06:03.840214       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:06:03.842696       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:06:03.842727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:06:03.842694       1 config.go:309] "Starting node config controller"
	I1219 03:06:03.842762       1 config.go:200] "Starting service config controller"
	I1219 03:06:03.842769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:06:03.842768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:06:03.842972       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:06:03.843007       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:06:03.942900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:06:03.942899       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:06:03.942907       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:06:03.943205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a] <==
	I1219 03:06:01.472873       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:06:03.026871       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:06:03.026986       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1219 03:06:03.027002       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:06:03.027011       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:06:03.089314       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:06:03.089358       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:06:03.093055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:06:03.093084       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:06:03.093364       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:06:03.094336       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:06:03.193871       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067872     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhhh\" (UniqueName: \"kubernetes.io/projected/af7e569e-9279-40a6-aa17-cda231d867a2-kube-api-access-4lhhh\") pod \"kubernetes-dashboard-web-5c9f966b98-pmb5t\" (UID: \"af7e569e-9279-40a6-aa17-cda231d867a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067900     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmswx\" (UniqueName: \"kubernetes.io/projected/24aef03d-85db-4df3-a193-f13c807f84de-kube-api-access-bmswx\") pod \"kubernetes-dashboard-auth-76bb77b695-58swx\" (UID: \"24aef03d-85db-4df3-a193-f13c807f84de\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067924     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e-tmp-volume\") pod \"kubernetes-dashboard-api-6c4454678d-vmnj2\" (UID: \"b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067959     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/af7e569e-9279-40a6-aa17-cda231d867a2-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-pmb5t\" (UID: \"af7e569e-9279-40a6-aa17-cda231d867a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.068002     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/24aef03d-85db-4df3-a193-f13c807f84de-tmp-volume\") pod \"kubernetes-dashboard-auth-76bb77b695-58swx\" (UID: \"24aef03d-85db-4df3-a193-f13c807f84de\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.068024     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9f54900a-1ad0-4593-8236-0a1dc1a88e64-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj\" (UID: \"9f54900a-1ad0-4593-8236-0a1dc1a88e64\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.110436     727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:06:08 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:08.735645     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:08 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:08.735776     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:09 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:09.227142     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2" podStartSLOduration=0.849461056 podStartE2EDuration="2.227114712s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.357732164 +0000 UTC m=+7.304652030" lastFinishedPulling="2025-12-19 03:06:08.735385823 +0000 UTC m=+8.682305686" observedRunningTime="2025-12-19 03:06:09.226299035 +0000 UTC m=+9.173218910" watchObservedRunningTime="2025-12-19 03:06:09.227114712 +0000 UTC m=+9.174034588"
	Dec 19 03:06:10 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:10.419464     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:10 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:10.419559     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:11 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:11.234033     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx" podStartSLOduration=1.191233274 podStartE2EDuration="4.234006036s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.376415045 +0000 UTC m=+7.323334914" lastFinishedPulling="2025-12-19 03:06:10.419187817 +0000 UTC m=+10.366107676" observedRunningTime="2025-12-19 03:06:11.233777792 +0000 UTC m=+11.180697668" watchObservedRunningTime="2025-12-19 03:06:11.234006036 +0000 UTC m=+11.180925911"
	Dec 19 03:06:13 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:13.311379     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:13 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:13.311529     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.115193     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.115296     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.241972     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj" podStartSLOduration=0.508150908 podStartE2EDuration="7.241948013s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.38113198 +0000 UTC m=+7.328051833" lastFinishedPulling="2025-12-19 03:06:14.11492908 +0000 UTC m=+14.061848938" observedRunningTime="2025-12-19 03:06:14.24166888 +0000 UTC m=+14.188588771" watchObservedRunningTime="2025-12-19 03:06:14.241948013 +0000 UTC m=+14.188867888"
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.255081     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t" podStartSLOduration=1.322160186 podStartE2EDuration="7.255055586s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.378248795 +0000 UTC m=+7.325168663" lastFinishedPulling="2025-12-19 03:06:13.311144187 +0000 UTC m=+13.258064063" observedRunningTime="2025-12-19 03:06:14.254652221 +0000 UTC m=+14.201572121" watchObservedRunningTime="2025-12-19 03:06:14.255055586 +0000 UTC m=+14.201975462"
	Dec 19 03:06:19 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:19.265507     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq" podStartSLOduration=1.591075171 podStartE2EDuration="12.26547879s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.391768736 +0000 UTC m=+7.338688592" lastFinishedPulling="2025-12-19 03:06:18.066172352 +0000 UTC m=+18.013092211" observedRunningTime="2025-12-19 03:06:19.265420913 +0000 UTC m=+19.212340789" watchObservedRunningTime="2025-12-19 03:06:19.26547879 +0000 UTC m=+19.212398667"
	Dec 19 03:06:34 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:34.292974     727 scope.go:117] "RemoveContainer" containerID="dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d"
	Dec 19 03:24:44 default-k8s-diff-port-717222 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:24:44 default-k8s-diff-port-717222 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:24:44 default-k8s-diff-port-717222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:24:44 default-k8s-diff-port-717222 systemd[1]: kubelet.service: Consumed 24.939s CPU time.
	
	
	==> kubernetes-dashboard [5c21853c28563a691ef440986410f18c67ba23dbc122b1d94b9cce6075bdfb75] <==
	I1219 03:06:08.860787       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:06:08.860900       1 init.go:49] Using in-cluster config
	I1219 03:06:08.861145       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:06:08.861164       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:06:08.861172       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:06:08.861177       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:06:08.868063       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:06:08.868091       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:06:08.944605       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:06:08.948604       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:06:38.953964       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [5fe7d916a364f331d8aa2665bfdbeab1fff27316fa0fee64cb7834c35bef418d] <==
	10.244.0.1 - - [19/Dec/2025:03:22:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:52 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:52 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	E1219 03:22:14.229684       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:14.229789       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:24:14.230171       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [6e3eff743b9cdb70ef6cbf70a1039d5cff4c8fe2e48d5a15acb23261f2b4507e] <==
	I1219 03:06:10.539923       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:06:10.540000       1 init.go:49] Using in-cluster config
	I1219 03:06:10.540134       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [efed0d882497800414676940b84aa41e026026efe618a2d160430de527d8e1f6] <==
	I1219 03:06:13.510889       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:06:13.510946       1 init.go:48] Using in-cluster config
	I1219 03:06:13.511172       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6] <==
	W1219 03:24:21.902374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.906060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:23.911319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.915018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.919611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.923189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.928429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.932430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.936267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.939085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.946026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.949633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.953689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.956529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.961490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.964279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.968416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.971880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.976073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:41.979852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:41.987361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:43.990726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:43.995018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:45.998460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:46.005824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d] <==
	I1219 03:06:03.592106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:06:33.595312       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222: exit status 2 (324.473226ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
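Note on the pod check above: the post-mortem's final kubectl query uses a field selector to list every pod whose status.phase is not Running for the failed profile. Purely as an illustrative sketch (not part of helpers_test.go; it assumes kubectl is on PATH and that a kubeconfig context named after the profile exists), the same query can be issued from a small standalone Go program:

// podcheck.go — hypothetical sketch only, not harness code: reproduce the
// post-mortem's "any pod not in the Running phase" query via kubectl.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumptions: kubectl on PATH, context name matches the profile under test.
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-717222",
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("pods not in Running phase: %s\n", out)
}

If the command prints nothing, no pod was outside the Running phase at the moment the post-mortem ran.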
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-717222
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-717222:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	        "Created": "2025-12-19T03:04:47.206515223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:05:53.385310779Z",
	            "FinishedAt": "2025-12-19T03:05:52.262245388Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/hosts",
	        "LogPath": "/var/lib/docker/containers/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59/f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59-json.log",
	        "Name": "/default-k8s-diff-port-717222",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-717222:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-717222",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f8284300a033937b3a595ad9657fda1df3f38bce1b3694605fa9d17bff6b6a59",
	                "LowerDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/73dc0bcaa024ed1b32e53abc59282bb71441b29d555c8c9ed18118be21650e76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-717222",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-717222/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-717222",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-717222",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9d06f7aea24e94d05365ef4f03fb5f64c6b5272dae79bd49619bd1821269410e",
	            "SandboxKey": "/var/run/docker/netns/9d06f7aea24e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-717222": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "61bece957d17b845e006f35e9e337693d4d396daf2e4f93e70692be3f3288cbb",
	                    "EndpointID": "2c278581ff3b356f6bebafb94e691fc066cab71fa7bdd973be671471a23efca1",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ae:9c:c1:61:6a:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-717222",
	                        "f8284300a033"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
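The inspect output above shows the outer node container with "Paused": false. That by itself is not the failure: minikube pause acts on the Kubernetes containers inside the node via the container runtime, not on the kicbase container, so the Docker-level state stays unpaused even on success. As an illustrative Go sketch only (not harness code; assumes the docker CLI on PATH), the single field can be read with docker inspect's format template instead of dumping the full JSON:

// pausecheck.go — hypothetical sketch: read just State.Paused for the node container.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Assumption: docker CLI on PATH; container name matches the profile under test.
	out, err := exec.Command("docker", "inspect",
		"-f", "{{.State.Paused}}",
		"default-k8s-diff-port-717222",
	).Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("container paused:", strings.TrimSpace(string(out)))
}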
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222: exit status 2 (321.715128ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
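The --format flags used by the status checks above ({{.APIServer}} earlier, {{.Host}} here) are Go text/template expressions applied to minikube's status output, which is why each command's stdout is a single bare word such as "Running" even though the overall exit status is 2. A minimal self-contained sketch of that template mechanism (the Status struct below is a hypothetical stand-in, not minikube's actual type):

// statusfmt.go — illustrative only: how a template like {{.Host}} selects one field.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	// {{.Host}} renders only the Host field, hence the bare "Running" line above.
	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = t.Execute(os.Stdout, s)
}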
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25
E1219 03:24:48.777100    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-717222 logs -n 25: (1.163804502s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-717222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-717222 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ addons  │ enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ stop    │ -p newest-cni-837172 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ embed-certs-805185 image list --format=json                                                                                                                                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p embed-certs-805185 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ delete  │ -p embed-certs-805185                                                                                                                                                                                                                              │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ default-k8s-diff-port-717222 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p default-k8s-diff-port-717222 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ delete  │ -p embed-certs-805185                                                                                                                                                                                                                              │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-837172 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:46.186425  380735 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:46.186684  380735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:46.186694  380735 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:46.186711  380735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:46.186932  380735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:46.187393  380735 out.go:368] Setting JSON to false
	I1219 03:24:46.188519  380735 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4037,"bootTime":1766110649,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:46.188572  380735 start.go:143] virtualization: kvm guest
	I1219 03:24:46.190437  380735 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:46.191829  380735 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:46.191879  380735 notify.go:221] Checking for updates...
	I1219 03:24:46.194410  380735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:46.195933  380735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:46.197315  380735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:46.198516  380735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:46.199874  380735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:46.201738  380735 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:46.202513  380735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:46.231628  380735 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:46.231787  380735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:46.296408  380735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:24:46.285802205 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:46.296560  380735 docker.go:319] overlay module found
	I1219 03:24:46.300911  380735 out.go:179] * Using the docker driver based on existing profile
	I1219 03:24:46.302079  380735 start.go:309] selected driver: docker
	I1219 03:24:46.302097  380735 start.go:928] validating driver "docker" against &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:46.302197  380735 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:46.302844  380735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:46.360231  380735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:24:46.349155163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:46.360633  380735 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:46.360678  380735 cni.go:84] Creating CNI manager for ""
	I1219 03:24:46.360796  380735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:46.360862  380735 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:46.363384  380735 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:46.364575  380735 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:46.365748  380735 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:46.366784  380735 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:46.366837  380735 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:46.366858  380735 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:46.366856  380735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:46.366954  380735 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:46.366968  380735 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:46.367086  380735 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:46.387214  380735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:46.387233  380735 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:46.387251  380735 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:46.387281  380735 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:46.387335  380735 start.go:364] duration metric: took 36.004µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:46.387352  380735 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:24:46.387359  380735 fix.go:54] fixHost starting: 
	I1219 03:24:46.387582  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:46.405829  380735 fix.go:112] recreateIfNeeded on newest-cni-837172: state=Stopped err=<nil>
	W1219 03:24:46.405867  380735 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.105809849Z" level=info msg="Created container 35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/clear-stale-pid" id=5a645826-349a-438a-8096-df1ef85fa13f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.106574675Z" level=info msg="Starting container: 35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270" id=cac0719a-4055-47b5-9cd1-db557fa3cd0a name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.108867589Z" level=info msg="Started container" PID=1966 containerID=35d02beeb2185abe39e28a11af6121432270e8cbb693e170c7e4268ee5b3b270 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/clear-stale-pid id=cac0719a-4055-47b5-9cd1-db557fa3cd0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8667e85e98055a9a632295b2700a5f7426d8a381a2d0a88f63c9f25413fcbb91
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.244843017Z" level=info msg="Checking image status: kong:3.9" id=0cec8e99-8e10-454e-875b-ea15d4a209cd name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.245030729Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.247083766Z" level=info msg="Checking image status: kong:3.9" id=3f2254f1-a52b-4104-87c2-661e1bd23ec3 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.247306541Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.25336671Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy" id=94e64d89-1339-4146-8f9c-53f734cb5589 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.253525887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.260510197Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.261326368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.301363315Z" level=info msg="Created container dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650: kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy" id=94e64d89-1339-4146-8f9c-53f734cb5589 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.30215616Z" level=info msg="Starting container: dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650" id=12ee788e-8289-4593-8fc9-302f77b8987b name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:18 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:18.304379149Z" level=info msg="Started container" PID=1977 containerID=dd2d524ddac2329c06c87daf6c7c8f6eb8a85bde1ae5d599415346c87bfae650 description=kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq/proxy id=12ee788e-8289-4593-8fc9-302f77b8987b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8667e85e98055a9a632295b2700a5f7426d8a381a2d0a88f63c9f25413fcbb91
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.293364694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7a2b6641-2330-4f1c-8ac3-bd5fc486ac9a name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.294343816Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=25406107-20f3-4be8-a6d5-7899eb74be0f name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.295572666Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ee62b1e2-d30f-4a02-8252-3cb5cb6d1371 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.295760296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302496713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302683962Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/79e095d41c86d78a8608ae66d21f2385ceb6046ec971057f76b5fefa76ae5f30/merged/etc/passwd: no such file or directory"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.302750865Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/79e095d41c86d78a8608ae66d21f2385ceb6046ec971057f76b5fefa76ae5f30/merged/etc/group: no such file or directory"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.303093477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.338341513Z" level=info msg="Created container d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6: kube-system/storage-provisioner/storage-provisioner" id=ee62b1e2-d30f-4a02-8252-3cb5cb6d1371 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.339046763Z" level=info msg="Starting container: d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6" id=43566eeb-3dc8-4778-bebc-e3a50bf31b5d name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:06:34 default-k8s-diff-port-717222 crio[565]: time="2025-12-19T03:06:34.341081965Z" level=info msg="Started container" PID=3395 containerID=d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6 description=kube-system/storage-provisioner/storage-provisioner id=43566eeb-3dc8-4778-bebc-e3a50bf31b5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=470b7f13281e4c61793ea7eeab1f00af8c464b75a182af8abe8a9e8fcfc00b9a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	d997c9b36079f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Running             storage-provisioner                    1                   470b7f13281e4       storage-provisioner                                     kube-system
	dd2d524ddac23       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           18 minutes ago      Running             proxy                                  0                   8667e85e98055       kubernetes-dashboard-kong-9849c64bd-jnmzq               kubernetes-dashboard
	35d02beeb2185       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             18 minutes ago      Exited              clear-stale-pid                        0                   8667e85e98055       kubernetes-dashboard-kong-9849c64bd-jnmzq               kubernetes-dashboard
	5fe7d916a364f       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   8df1f8a8e9b8c       kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj   kubernetes-dashboard
	efed0d8824978       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               18 minutes ago      Running             kubernetes-dashboard-web               0                   85c0932639a7f       kubernetes-dashboard-web-5c9f966b98-pmb5t               kubernetes-dashboard
	6e3eff743b9cd       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              18 minutes ago      Running             kubernetes-dashboard-auth              0                   226a7334560d4       kubernetes-dashboard-auth-76bb77b695-58swx              kubernetes-dashboard
	5c21853c28563       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               18 minutes ago      Running             kubernetes-dashboard-api               0                   442cfc6f80155       kubernetes-dashboard-api-6c4454678d-vmnj2               kubernetes-dashboard
	561ec43405227       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                                           18 minutes ago      Running             busybox                                1                   bdce9bd9d632c       busybox                                                 default
	2592b062e7872       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                           18 minutes ago      Running             coredns                                0                   ad0fcb07810bf       coredns-66bc5c9577-dskxl                                kube-system
	dbbb6a255de37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           18 minutes ago      Exited              storage-provisioner                    0                   470b7f13281e4       storage-provisioner                                     kube-system
	d7b31f6039b4c       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           18 minutes ago      Running             kindnet-cni                            0                   42aa8ce5cba75       kindnet-zgcrn                                           kube-system
	cd178b86eed6d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                           18 minutes ago      Running             kube-proxy                             0                   84cdb0361e2e6       kube-proxy-mr7c8                                        kube-system
	1340a2f59347d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                           18 minutes ago      Running             etcd                                   0                   ccb6ae903ae17       etcd-default-k8s-diff-port-717222                       kube-system
	725faee3812c5       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                           18 minutes ago      Running             kube-scheduler                         0                   2ad392cb5e514       kube-scheduler-default-k8s-diff-port-717222             kube-system
	d2c496c53c696       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                           18 minutes ago      Running             kube-apiserver                         0                   ec833bb6abd84       kube-apiserver-default-k8s-diff-port-717222             kube-system
	0fb4e8910a64f       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                           18 minutes ago      Running             kube-controller-manager                0                   6217f80d4b77a       kube-controller-manager-default-k8s-diff-port-717222    kube-system
	
	
	==> coredns [2592b062e787245c17fcfad40e551290657aea425be5e044174243d7524bc317] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55129 - 16165 "HINFO IN 3453254911344364497.3052208195299777284. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04385742s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-717222
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-717222
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-717222
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_05_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:05:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-717222
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:24:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:04:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:24:35 +0000   Fri, 19 Dec 2025 03:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-717222
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                301b16dc-31c1-4466-a363-b4e4f9941cd5
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-dskxl                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-717222                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-zgcrn                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-717222              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-717222     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-mr7c8                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-717222              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        kubernetes-dashboard-api-6c4454678d-vmnj2                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-76bb77b695-58swx               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-jnmzq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-pmb5t                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
	  Normal  NodeReady                19m                kubelet          Node default-k8s-diff-port-717222 status is now: NodeReady
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-717222 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-717222 event: Registered Node default-k8s-diff-port-717222 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [1340a2f59347d94cb67897c14b11276bb491943caa3b94ed7080db25e5d5dc78] <==
	{"level":"warn","ts":"2025-12-19T03:06:06.378732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.407834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.459907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.484810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.498580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.516121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.532033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.548224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.567442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.583249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.608694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:06:06.623918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49614","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:16:01.657050Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":990}
	{"level":"info","ts":"2025-12-19T03:16:01.664840Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":990,"took":"7.456943ms","hash":2471762061,"current-db-size-bytes":3915776,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":3915776,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2025-12-19T03:16:01.664919Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2471762061,"revision":990,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:21:01.662543Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1235}
	{"level":"info","ts":"2025-12-19T03:21:01.664987Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1235,"took":"2.122855ms","hash":87961367,"current-db-size-bytes":3915776,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2101248,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-19T03:21:01.665040Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":87961367,"revision":1235,"compact-revision":990}
	{"level":"info","ts":"2025-12-19T03:24:04.616130Z","caller":"traceutil/trace.go:172","msg":"trace[305349054] transaction","detail":"{read_only:false; response_revision:1632; number_of_response:1; }","duration":"142.323928ms","start":"2025-12-19T03:24:04.473787Z","end":"2025-12-19T03:24:04.616111Z","steps":["trace[305349054] 'process raft request'  (duration: 125.885135ms)","trace[305349054] 'compare'  (duration: 16.341168ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:24:04.788967Z","caller":"traceutil/trace.go:172","msg":"trace[736017107] linearizableReadLoop","detail":"{readStateIndex:1877; appliedIndex:1877; }","duration":"171.212825ms","start":"2025-12-19T03:24:04.617731Z","end":"2025-12-19T03:24:04.788944Z","steps":["trace[736017107] 'read index received'  (duration: 171.200561ms)","trace[736017107] 'applied index is now lower than readState.Index'  (duration: 11.453µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:24:04.789391Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.646552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2025-12-19T03:24:04.789481Z","caller":"traceutil/trace.go:172","msg":"trace[1502582395] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1632; }","duration":"171.772291ms","start":"2025-12-19T03:24:04.617692Z","end":"2025-12-19T03:24:04.789464Z","steps":["trace[1502582395] 'agreement among raft nodes before linearized reading'  (duration: 171.370456ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:24:04.789526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.513258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configuration.konghq.com/konglicenses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:24:04.789556Z","caller":"traceutil/trace.go:172","msg":"trace[331089237] range","detail":"{range_begin:/registry/configuration.konghq.com/konglicenses; range_end:; response_count:0; response_revision:1633; }","duration":"107.546037ms","start":"2025-12-19T03:24:04.682002Z","end":"2025-12-19T03:24:04.789548Z","steps":["trace[331089237] 'agreement among raft nodes before linearized reading'  (duration: 107.495727ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:24:04.789623Z","caller":"traceutil/trace.go:172","msg":"trace[872270148] transaction","detail":"{read_only:false; response_revision:1633; number_of_response:1; }","duration":"218.608589ms","start":"2025-12-19T03:24:04.570993Z","end":"2025-12-19T03:24:04.789601Z","steps":["trace[872270148] 'process raft request'  (duration: 218.073061ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:24:49 up  1:07,  0 user,  load average: 2.02, 0.95, 1.25
	Linux default-k8s-diff-port-717222 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d7b31f6039b4c71a1c774e7e89359f49dd4bca0b72f47cce0a7db10b8a4eb339] <==
	I1219 03:22:44.143465       1 main.go:301] handling current node
	I1219 03:22:54.151094       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:22:54.151132       1 main.go:301] handling current node
	I1219 03:23:04.143666       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:04.143740       1 main.go:301] handling current node
	I1219 03:23:14.144759       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:14.144798       1 main.go:301] handling current node
	I1219 03:23:24.151061       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:24.151091       1 main.go:301] handling current node
	I1219 03:23:34.143600       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:34.143635       1 main.go:301] handling current node
	I1219 03:23:44.142928       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:44.142982       1 main.go:301] handling current node
	I1219 03:23:54.143360       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:23:54.143398       1 main.go:301] handling current node
	I1219 03:24:04.151416       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:04.151454       1 main.go:301] handling current node
	I1219 03:24:14.147032       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:14.147066       1 main.go:301] handling current node
	I1219 03:24:24.143281       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:24.143313       1 main.go:301] handling current node
	I1219 03:24:34.148458       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:34.148865       1 main.go:301] handling current node
	I1219 03:24:44.147201       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1219 03:24:44.147235       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2c496c53c69616e93f161ca59c69ebaa3d4de81e94460893270a29e321886eb] <==
	I1219 03:06:06.068365       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:06:06.073897       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:06:06.084961       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.107.87.247"}
	I1219 03:06:06.089336       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.220.200"}
	I1219 03:06:06.096055       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.107.37.89"}
	I1219 03:06:06.097724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.126.95"}
	I1219 03:06:06.105426       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.103.60.201"}
	I1219 03:06:06.111150       1 controller.go:667] quota admission added evaluator for: deployments.apps
	W1219 03:06:06.366398       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.407675       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.460136       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.484666       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.498913       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.516026       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.532002       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.548159       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.564547       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:06:06.583215       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 03:06:06.599243       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	W1219 03:06:06.606221       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:06:06.623365       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:06:06.946827       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:06:07.061226       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:16:03.036227       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [0fb4e8910a64fc9fe34f2319ccb58566695768a8c137e01b391e3004481cb992] <==
	I1219 03:06:06.443886       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:06:06.448122       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 03:06:06.448186       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 03:06:06.448203       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 03:06:06.448213       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 03:06:06.465415       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1219 03:06:06.465574       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1219 03:06:06.465610       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1219 03:06:06.465621       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1219 03:06:06.465629       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1219 03:06:06.469733       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1219 03:06:06.472102       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:06:06.475316       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:06:06.478047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:06:06.492013       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:06:06.492117       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:06:06.492629       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:06:06.493189       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:06:06.493873       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:06:07.594172       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:06:07.650019       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:06:07.681489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:06:07.691740       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:06:07.691828       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:06:07.691843       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [cd178b86eed6df4e301822d1cb033cde8457245acc5c1565f60ccb12d47ee2aa] <==
	I1219 03:06:03.629338       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:06:03.701880       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:06:03.802296       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:06:03.802339       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1219 03:06:03.802448       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:06:03.830859       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:06:03.830933       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:06:03.839110       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:06:03.840168       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:06:03.840214       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:06:03.842696       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:06:03.842727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:06:03.842694       1 config.go:309] "Starting node config controller"
	I1219 03:06:03.842762       1 config.go:200] "Starting service config controller"
	I1219 03:06:03.842769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:06:03.842768       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:06:03.842972       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:06:03.843007       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:06:03.942900       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:06:03.942899       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:06:03.942907       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:06:03.943205       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [725faee3812c5d3136b764571b2c9b270517680beb590d21de31aad0b5d0b89a] <==
	I1219 03:06:01.472873       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:06:03.026871       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:06:03.026986       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1219 03:06:03.027002       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:06:03.027011       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:06:03.089314       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:06:03.089358       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:06:03.093055       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:06:03.093084       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:06:03.093364       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:06:03.094336       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:06:03.193871       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067872     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhhh\" (UniqueName: \"kubernetes.io/projected/af7e569e-9279-40a6-aa17-cda231d867a2-kube-api-access-4lhhh\") pod \"kubernetes-dashboard-web-5c9f966b98-pmb5t\" (UID: \"af7e569e-9279-40a6-aa17-cda231d867a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067900     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmswx\" (UniqueName: \"kubernetes.io/projected/24aef03d-85db-4df3-a193-f13c807f84de-kube-api-access-bmswx\") pod \"kubernetes-dashboard-auth-76bb77b695-58swx\" (UID: \"24aef03d-85db-4df3-a193-f13c807f84de\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067924     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e-tmp-volume\") pod \"kubernetes-dashboard-api-6c4454678d-vmnj2\" (UID: \"b73c0a8a-5571-4af3-bd91-2ce3ab1f7b8e\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.067959     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/af7e569e-9279-40a6-aa17-cda231d867a2-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-pmb5t\" (UID: \"af7e569e-9279-40a6-aa17-cda231d867a2\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.068002     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/24aef03d-85db-4df3-a193-f13c807f84de-tmp-volume\") pod \"kubernetes-dashboard-auth-76bb77b695-58swx\" (UID: \"24aef03d-85db-4df3-a193-f13c807f84de\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.068024     727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9f54900a-1ad0-4593-8236-0a1dc1a88e64-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj\" (UID: \"9f54900a-1ad0-4593-8236-0a1dc1a88e64\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj"
	Dec 19 03:06:07 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:07.110436     727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 19 03:06:08 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:08.735645     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:08 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:08.735776     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:09 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:09.227142     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-6c4454678d-vmnj2" podStartSLOduration=0.849461056 podStartE2EDuration="2.227114712s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.357732164 +0000 UTC m=+7.304652030" lastFinishedPulling="2025-12-19 03:06:08.735385823 +0000 UTC m=+8.682305686" observedRunningTime="2025-12-19 03:06:09.226299035 +0000 UTC m=+9.173218910" watchObservedRunningTime="2025-12-19 03:06:09.227114712 +0000 UTC m=+9.174034588"
	Dec 19 03:06:10 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:10.419464     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:10 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:10.419559     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:11 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:11.234033     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-76bb77b695-58swx" podStartSLOduration=1.191233274 podStartE2EDuration="4.234006036s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.376415045 +0000 UTC m=+7.323334914" lastFinishedPulling="2025-12-19 03:06:10.419187817 +0000 UTC m=+10.366107676" observedRunningTime="2025-12-19 03:06:11.233777792 +0000 UTC m=+11.180697668" watchObservedRunningTime="2025-12-19 03:06:11.234006036 +0000 UTC m=+11.180925911"
	Dec 19 03:06:13 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:13.311379     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:13 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:13.311529     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.115193     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.115296     727 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.241972     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-pwmcj" podStartSLOduration=0.508150908 podStartE2EDuration="7.241948013s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.38113198 +0000 UTC m=+7.328051833" lastFinishedPulling="2025-12-19 03:06:14.11492908 +0000 UTC m=+14.061848938" observedRunningTime="2025-12-19 03:06:14.24166888 +0000 UTC m=+14.188588771" watchObservedRunningTime="2025-12-19 03:06:14.241948013 +0000 UTC m=+14.188867888"
	Dec 19 03:06:14 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:14.255081     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-pmb5t" podStartSLOduration=1.322160186 podStartE2EDuration="7.255055586s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.378248795 +0000 UTC m=+7.325168663" lastFinishedPulling="2025-12-19 03:06:13.311144187 +0000 UTC m=+13.258064063" observedRunningTime="2025-12-19 03:06:14.254652221 +0000 UTC m=+14.201572121" watchObservedRunningTime="2025-12-19 03:06:14.255055586 +0000 UTC m=+14.201975462"
	Dec 19 03:06:19 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:19.265507     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-jnmzq" podStartSLOduration=1.591075171 podStartE2EDuration="12.26547879s" podCreationTimestamp="2025-12-19 03:06:07 +0000 UTC" firstStartedPulling="2025-12-19 03:06:07.391768736 +0000 UTC m=+7.338688592" lastFinishedPulling="2025-12-19 03:06:18.066172352 +0000 UTC m=+18.013092211" observedRunningTime="2025-12-19 03:06:19.265420913 +0000 UTC m=+19.212340789" watchObservedRunningTime="2025-12-19 03:06:19.26547879 +0000 UTC m=+19.212398667"
	Dec 19 03:06:34 default-k8s-diff-port-717222 kubelet[727]: I1219 03:06:34.292974     727 scope.go:117] "RemoveContainer" containerID="dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d"
	Dec 19 03:24:44 default-k8s-diff-port-717222 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:24:44 default-k8s-diff-port-717222 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:24:44 default-k8s-diff-port-717222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:24:44 default-k8s-diff-port-717222 systemd[1]: kubelet.service: Consumed 24.939s CPU time.
	
	
	==> kubernetes-dashboard [5c21853c28563a691ef440986410f18c67ba23dbc122b1d94b9cce6075bdfb75] <==
	I1219 03:06:08.860787       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:06:08.860900       1 init.go:49] Using in-cluster config
	I1219 03:06:08.861145       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:06:08.861164       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:06:08.861172       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:06:08.861177       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:06:08.868063       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:06:08.868091       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:06:08.944605       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:06:08.948604       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:06:38.953964       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [5fe7d916a364f331d8aa2665bfdbeab1fff27316fa0fee64cb7834c35bef418d] <==
	E1219 03:22:14.229684       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:23:14.229789       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	E1219 03:24:14.230171       1 main.go:114] Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:22:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:22:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:22:52 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:23:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:23:52 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:02 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:09 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:12 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:22 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:32 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:24:39 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:24:42 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	
	
	==> kubernetes-dashboard [6e3eff743b9cdb70ef6cbf70a1039d5cff4c8fe2e48d5a15acb23261f2b4507e] <==
	I1219 03:06:10.539923       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:06:10.540000       1 init.go:49] Using in-cluster config
	I1219 03:06:10.540134       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [efed0d882497800414676940b84aa41e026026efe618a2d160430de527d8e1f6] <==
	I1219 03:06:13.510889       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:06:13.510946       1 init.go:48] Using in-cluster config
	I1219 03:06:13.511172       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [d997c9b36079f5a7a989c5b8afff9f0cb04f9dfcbd3977915ee11b3d32ada9a6] <==
	W1219 03:24:23.911319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.915018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:25.919611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.923189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:27.928429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.932430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:29.936267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.939085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:31.946026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.949633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:33.953689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.956529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:35.961490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.964279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:37.968416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.971880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:39.976073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:41.979852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:41.987361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:43.990726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:43.995018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:45.998460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:46.005824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:48.008946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:24:48.014549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dbbb6a255de373670b1e2cdd1093e839a1db9c2f1eff3fcb467e13bda9db456d] <==
	I1219 03:06:03.592106       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:06:33.595312       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222: exit status 2 (331.448387ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.74s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-837172 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-837172 --alsologtostderr -v=1: exit status 80 (2.322224962s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-837172 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:25:17.852684  386654 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:25:17.852871  386654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:25:17.852888  386654 out.go:374] Setting ErrFile to fd 2...
	I1219 03:25:17.852893  386654 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:25:17.853100  386654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:25:17.853337  386654 out.go:368] Setting JSON to false
	I1219 03:25:17.853359  386654 mustload.go:66] Loading cluster: newest-cni-837172
	I1219 03:25:17.853734  386654 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:25:17.854139  386654 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:25:17.873393  386654 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:25:17.873965  386654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:25:17.932461  386654 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:25:17.922355015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:25:17.933098  386654 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-837172 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1219 03:25:17.934997  386654 out.go:179] * Pausing node newest-cni-837172 ... 
	I1219 03:25:17.936238  386654 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:25:17.936470  386654 ssh_runner.go:195] Run: systemctl --version
	I1219 03:25:17.936503  386654 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:25:17.956123  386654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:25:18.056668  386654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:25:18.069003  386654 pause.go:52] kubelet running: true
	I1219 03:25:18.069080  386654 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:25:18.243609  386654 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:25:18.243752  386654 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:25:18.310443  386654 cri.go:92] found id: "9ccabcaa81e1b6495f7bf45ad6fa9e2ea8e2e4a121f83d0d47cc7b4febce7387"
	I1219 03:25:18.310471  386654 cri.go:92] found id: "d0c550beeeb6567d0a1c34c8c5920c0f2dd33b9ad81073dd94e7b76aaff5887a"
	I1219 03:25:18.310476  386654 cri.go:92] found id: "ed72659cbbde1ac0d1a3f92bad6e21b22b83b110f25c911c29be464bd98e904e"
	I1219 03:25:18.310479  386654 cri.go:92] found id: "da4194cf3330daac8fb3341a58e3cc51bc2a445831557d5264d994bb830c2072"
	I1219 03:25:18.310482  386654 cri.go:92] found id: "fda13eb9d3da07d79f3fb0741d2e97dea1c1da35c264d9c835de5f407e51329b"
	I1219 03:25:18.310485  386654 cri.go:92] found id: "8eb6c2ea1b67c5f966862ab6d7cb34785fcf89532732134cb85e859b4ed40cd2"
	I1219 03:25:18.310488  386654 cri.go:92] found id: "008ab506a7502be9960429732f171f4e4404010c1d76b19a75fa46ee6f21e968"
	I1219 03:25:18.310491  386654 cri.go:92] found id: "f83a8d18b586d30bed3f7b09ab702d2f91fad51a1abf5039cd003618810c256f"
	I1219 03:25:18.310494  386654 cri.go:92] found id: "4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884"
	I1219 03:25:18.310501  386654 cri.go:92] found id: "b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc"
	I1219 03:25:18.310505  386654 cri.go:92] found id: "7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53"
	I1219 03:25:18.310510  386654 cri.go:92] found id: "31d0c579886a60b8b9f5ac47b7e2223c4edc5ba9511956f59b46b4e6e8a493b5"
	I1219 03:25:18.310514  386654 cri.go:92] found id: "f35c30abddd500953c627635a6009dfd10e461179831ce083750a776a74432d0"
	I1219 03:25:18.310518  386654 cri.go:92] found id: "e9b169e38837e0e14dc98c97fe6c76f4950aa4f2569e237239b942cffc704403"
	I1219 03:25:18.310523  386654 cri.go:92] found id: ""
	I1219 03:25:18.310566  386654 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:25:18.322894  386654 retry.go:31] will retry after 192.211219ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:25:18Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:25:18.515327  386654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:25:18.529125  386654 pause.go:52] kubelet running: false
	I1219 03:25:18.529193  386654 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:25:18.668568  386654 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:25:18.668649  386654 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:25:18.737078  386654 cri.go:92] found id: "9ccabcaa81e1b6495f7bf45ad6fa9e2ea8e2e4a121f83d0d47cc7b4febce7387"
	I1219 03:25:18.737107  386654 cri.go:92] found id: "d0c550beeeb6567d0a1c34c8c5920c0f2dd33b9ad81073dd94e7b76aaff5887a"
	I1219 03:25:18.737114  386654 cri.go:92] found id: "ed72659cbbde1ac0d1a3f92bad6e21b22b83b110f25c911c29be464bd98e904e"
	I1219 03:25:18.737120  386654 cri.go:92] found id: "da4194cf3330daac8fb3341a58e3cc51bc2a445831557d5264d994bb830c2072"
	I1219 03:25:18.737124  386654 cri.go:92] found id: "fda13eb9d3da07d79f3fb0741d2e97dea1c1da35c264d9c835de5f407e51329b"
	I1219 03:25:18.737129  386654 cri.go:92] found id: "8eb6c2ea1b67c5f966862ab6d7cb34785fcf89532732134cb85e859b4ed40cd2"
	I1219 03:25:18.737134  386654 cri.go:92] found id: "008ab506a7502be9960429732f171f4e4404010c1d76b19a75fa46ee6f21e968"
	I1219 03:25:18.737138  386654 cri.go:92] found id: "f83a8d18b586d30bed3f7b09ab702d2f91fad51a1abf5039cd003618810c256f"
	I1219 03:25:18.737143  386654 cri.go:92] found id: "4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884"
	I1219 03:25:18.737151  386654 cri.go:92] found id: "b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc"
	I1219 03:25:18.737156  386654 cri.go:92] found id: "7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53"
	I1219 03:25:18.737176  386654 cri.go:92] found id: "31d0c579886a60b8b9f5ac47b7e2223c4edc5ba9511956f59b46b4e6e8a493b5"
	I1219 03:25:18.737184  386654 cri.go:92] found id: "f35c30abddd500953c627635a6009dfd10e461179831ce083750a776a74432d0"
	I1219 03:25:18.737189  386654 cri.go:92] found id: "e9b169e38837e0e14dc98c97fe6c76f4950aa4f2569e237239b942cffc704403"
	I1219 03:25:18.737193  386654 cri.go:92] found id: ""
	I1219 03:25:18.737243  386654 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:25:18.749909  386654 retry.go:31] will retry after 393.628432ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:25:18Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:25:19.144572  386654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:25:19.157778  386654 pause.go:52] kubelet running: false
	I1219 03:25:19.157859  386654 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:25:19.305366  386654 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:25:19.305451  386654 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:25:19.372437  386654 cri.go:92] found id: "9ccabcaa81e1b6495f7bf45ad6fa9e2ea8e2e4a121f83d0d47cc7b4febce7387"
	I1219 03:25:19.372461  386654 cri.go:92] found id: "d0c550beeeb6567d0a1c34c8c5920c0f2dd33b9ad81073dd94e7b76aaff5887a"
	I1219 03:25:19.372465  386654 cri.go:92] found id: "ed72659cbbde1ac0d1a3f92bad6e21b22b83b110f25c911c29be464bd98e904e"
	I1219 03:25:19.372470  386654 cri.go:92] found id: "da4194cf3330daac8fb3341a58e3cc51bc2a445831557d5264d994bb830c2072"
	I1219 03:25:19.372473  386654 cri.go:92] found id: "fda13eb9d3da07d79f3fb0741d2e97dea1c1da35c264d9c835de5f407e51329b"
	I1219 03:25:19.372476  386654 cri.go:92] found id: "8eb6c2ea1b67c5f966862ab6d7cb34785fcf89532732134cb85e859b4ed40cd2"
	I1219 03:25:19.372479  386654 cri.go:92] found id: "008ab506a7502be9960429732f171f4e4404010c1d76b19a75fa46ee6f21e968"
	I1219 03:25:19.372482  386654 cri.go:92] found id: "f83a8d18b586d30bed3f7b09ab702d2f91fad51a1abf5039cd003618810c256f"
	I1219 03:25:19.372485  386654 cri.go:92] found id: "4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884"
	I1219 03:25:19.372495  386654 cri.go:92] found id: "b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc"
	I1219 03:25:19.372500  386654 cri.go:92] found id: "7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53"
	I1219 03:25:19.372506  386654 cri.go:92] found id: "31d0c579886a60b8b9f5ac47b7e2223c4edc5ba9511956f59b46b4e6e8a493b5"
	I1219 03:25:19.372523  386654 cri.go:92] found id: "f35c30abddd500953c627635a6009dfd10e461179831ce083750a776a74432d0"
	I1219 03:25:19.372530  386654 cri.go:92] found id: "e9b169e38837e0e14dc98c97fe6c76f4950aa4f2569e237239b942cffc704403"
	I1219 03:25:19.372535  386654 cri.go:92] found id: ""
	I1219 03:25:19.372581  386654 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:25:19.384445  386654 retry.go:31] will retry after 488.571821ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:25:19Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:25:19.874216  386654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:25:19.887366  386654 pause.go:52] kubelet running: false
	I1219 03:25:19.887436  386654 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1219 03:25:20.027554  386654 cri.go:57] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1219 03:25:20.027658  386654 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1219 03:25:20.092088  386654 cri.go:92] found id: "9ccabcaa81e1b6495f7bf45ad6fa9e2ea8e2e4a121f83d0d47cc7b4febce7387"
	I1219 03:25:20.092113  386654 cri.go:92] found id: "d0c550beeeb6567d0a1c34c8c5920c0f2dd33b9ad81073dd94e7b76aaff5887a"
	I1219 03:25:20.092119  386654 cri.go:92] found id: "ed72659cbbde1ac0d1a3f92bad6e21b22b83b110f25c911c29be464bd98e904e"
	I1219 03:25:20.092124  386654 cri.go:92] found id: "da4194cf3330daac8fb3341a58e3cc51bc2a445831557d5264d994bb830c2072"
	I1219 03:25:20.092128  386654 cri.go:92] found id: "fda13eb9d3da07d79f3fb0741d2e97dea1c1da35c264d9c835de5f407e51329b"
	I1219 03:25:20.092132  386654 cri.go:92] found id: "8eb6c2ea1b67c5f966862ab6d7cb34785fcf89532732134cb85e859b4ed40cd2"
	I1219 03:25:20.092135  386654 cri.go:92] found id: "008ab506a7502be9960429732f171f4e4404010c1d76b19a75fa46ee6f21e968"
	I1219 03:25:20.092139  386654 cri.go:92] found id: "f83a8d18b586d30bed3f7b09ab702d2f91fad51a1abf5039cd003618810c256f"
	I1219 03:25:20.092143  386654 cri.go:92] found id: "4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884"
	I1219 03:25:20.092170  386654 cri.go:92] found id: "b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc"
	I1219 03:25:20.092177  386654 cri.go:92] found id: "7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53"
	I1219 03:25:20.092182  386654 cri.go:92] found id: "31d0c579886a60b8b9f5ac47b7e2223c4edc5ba9511956f59b46b4e6e8a493b5"
	I1219 03:25:20.092187  386654 cri.go:92] found id: "f35c30abddd500953c627635a6009dfd10e461179831ce083750a776a74432d0"
	I1219 03:25:20.092195  386654 cri.go:92] found id: "e9b169e38837e0e14dc98c97fe6c76f4950aa4f2569e237239b942cffc704403"
	I1219 03:25:20.092204  386654 cri.go:92] found id: ""
	I1219 03:25:20.092254  386654 ssh_runner.go:195] Run: sudo runc list -f json
	I1219 03:25:20.107489  386654 out.go:203] 
	W1219 03:25:20.108801  386654 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:25:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:25:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1219 03:25:20.108818  386654 out.go:285] * 
	* 
	W1219 03:25:20.112847  386654 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 03:25:20.114390  386654 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-837172 --alsologtostderr -v=1 failed: exit status 80
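Note: the listing step that fails above can be re-run by hand on the node; a minimal sketch, assuming SSH access to the profile from this run (commands are the same ones the pause path logs above, plus a plain directory check):

	minikube ssh -p newest-cni-837172
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # CRI-level view; containers are listed
	sudo runc list -f json                                                      # low-level listing the pause path relies on
	ls -ld /run/runc                                                            # state directory named in the error

If /run/runc is absent while crictl still lists running containers, the low-level runtime is presumably keeping its state under a different root (crun, for instance, defaults to /run/crun), which would be consistent with the GUEST_PAUSE error reported here.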
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-837172
helpers_test.go:244: (dbg) docker inspect newest-cni-837172:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83",
	        "Created": "2025-12-19T03:24:05.774434179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 381052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:24:46.434199163Z",
	            "FinishedAt": "2025-12-19T03:24:45.196372534Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/hostname",
	        "HostsPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/hosts",
	        "LogPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83-json.log",
	        "Name": "/newest-cni-837172",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-837172:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-837172",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83",
	                "LowerDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-837172",
	                "Source": "/var/lib/docker/volumes/newest-cni-837172/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-837172",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-837172",
	                "name.minikube.sigs.k8s.io": "newest-cni-837172",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "66035efdff628f49fa69a1f0aeec65519fe4987d90b38ba7f18b9aef25aebc5c",
	            "SandboxKey": "/var/run/docker/netns/66035efdff62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-837172": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "031824ca2cfc364eb4fab915cefaa7a9d15393eeb43e3a28ecfa7e5605c16dd1",
	                    "EndpointID": "405696e6393191148b9f7f05ba6247a0756e4c86e23bcdfc02ec7737ea0dce9f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d6:31:d4:6e:c7:36",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-837172",
	                        "351fe078c7b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
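The inspect output above is what the post-mortem helpers work from when they need to reach the node: each exposed container port, including 22 for SSH and 8443 for the Kubernetes API server, is published only on 127.0.0.1 under an ephemeral host port (33143-33147 here). minikube pulls these out with a Go template (the docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls further down in this log). A minimal standalone sketch of the same lookup, illustrative only and not minikube's own code, could look like this:

	// portfor.go - illustrative sketch: read a published host port out of
	// `docker container inspect` JSON, i.e. the same data shown in the dump above.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)
	
	// Only the fields we need from the inspect output.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	
	func hostPort(container, containerPort string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var infos []inspect // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &infos); err != nil {
			return "", err
		}
		if len(infos) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := infos[0].NetworkSettings.Ports[containerPort]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no binding for %s", containerPort)
		}
		return bindings[0].HostPort, nil
	}
	
	func main() {
		p, err := hostPort("newest-cni-837172", "22/tcp") // "33143" in the dump above
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(p)
	}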
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837172 -n newest-cni-837172
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837172 -n newest-cni-837172: exit status 2 (321.877802ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-837172 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-837172 logs -n 25: (1.373269392s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ stop    │ -p newest-cni-837172 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ embed-certs-805185 image list --format=json                                                                                                                                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p embed-certs-805185 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ delete  │ -p embed-certs-805185                                                                                                                                                                                                                              │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ default-k8s-diff-port-717222 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p default-k8s-diff-port-717222 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ delete  │ -p embed-certs-805185                                                                                                                                                                                                                              │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-837172 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:25 UTC │
	│ delete  │ -p default-k8s-diff-port-717222                                                                                                                                                                                                                    │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p default-k8s-diff-port-717222                                                                                                                                                                                                                    │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ newest-cni-837172 image list --format=json                                                                                                                                                                                                         │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:25 UTC │ 19 Dec 25 03:25 UTC │
	│ pause   │ -p newest-cni-837172 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
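The Audit table above is minikube's own record of recent invocations. Rows with an empty END TIME are commands that never reported completion; the last such row, pause -p newest-cni-837172, is exactly the invocation whose failure is being post-mortemed here, and the earlier pause rows for the other profiles match the other failed Pause subtests. As a hedged sketch, the same filtering can be done against minikube's audit log directly; the file location and field names below are assumptions inferred from the table's columns, not details taken from this report:

	// auditgrep.go - hedged sketch: list minikube audit entries that never
	// recorded an end time. The audit file path and field names are assumptions
	// based on the columns in the table above.
	package main
	
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os"
	)
	
	type entry struct {
		Data struct {
			Command   string `json:"command"`
			Args      string `json:"args"`
			Profile   string `json:"profile"`
			StartTime string `json:"startTime"`
			EndTime   string `json:"endTime"`
		} `json:"data"`
	}
	
	func main() {
		// Assumed default location; this job points MINIKUBE_HOME elsewhere.
		f, err := os.Open(os.ExpandEnv("$HOME/.minikube/logs/audit.json"))
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			var e entry
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip lines that do not parse as single JSON objects
			}
			if e.Data.EndTime == "" {
				fmt.Printf("%s %s (profile %s, started %s)\n",
					e.Data.Command, e.Data.Args, e.Data.Profile, e.Data.StartTime)
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}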
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:46.186425  380735 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:46.186684  380735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:46.186694  380735 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:46.186711  380735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:46.186932  380735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:46.187393  380735 out.go:368] Setting JSON to false
	I1219 03:24:46.188519  380735 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4037,"bootTime":1766110649,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:46.188572  380735 start.go:143] virtualization: kvm guest
	I1219 03:24:46.190437  380735 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:46.191829  380735 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:46.191879  380735 notify.go:221] Checking for updates...
	I1219 03:24:46.194410  380735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:46.195933  380735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:46.197315  380735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:46.198516  380735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:46.199874  380735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:46.201738  380735 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:46.202513  380735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:46.231628  380735 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:46.231787  380735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:46.296408  380735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:24:46.285802205 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:46.296560  380735 docker.go:319] overlay module found
	I1219 03:24:46.300911  380735 out.go:179] * Using the docker driver based on existing profile
	I1219 03:24:46.302079  380735 start.go:309] selected driver: docker
	I1219 03:24:46.302097  380735 start.go:928] validating driver "docker" against &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:46.302197  380735 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:46.302844  380735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:46.360231  380735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:24:46.349155163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:46.360633  380735 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:46.360678  380735 cni.go:84] Creating CNI manager for ""
	I1219 03:24:46.360796  380735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:46.360862  380735 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:46.363384  380735 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:46.364575  380735 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:46.365748  380735 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:46.366784  380735 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:46.366837  380735 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:46.366858  380735 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:46.366856  380735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:46.366954  380735 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:46.366968  380735 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:46.367086  380735 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:46.387214  380735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:46.387233  380735 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:46.387251  380735 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:46.387281  380735 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:46.387335  380735 start.go:364] duration metric: took 36.004µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:46.387352  380735 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:24:46.387359  380735 fix.go:54] fixHost starting: 
	I1219 03:24:46.387582  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:46.405829  380735 fix.go:112] recreateIfNeeded on newest-cni-837172: state=Stopped err=<nil>
	W1219 03:24:46.405867  380735 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:24:46.407595  380735 out.go:252] * Restarting existing docker container for "newest-cni-837172" ...
	I1219 03:24:46.407668  380735 cli_runner.go:164] Run: docker start newest-cni-837172
	I1219 03:24:46.661365  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:46.683720  380735 kic.go:430] container "newest-cni-837172" state is running.
	I1219 03:24:46.684145  380735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:46.704487  380735 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:46.704785  380735 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:46.704878  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:46.724642  380735 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:46.724907  380735 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1219 03:24:46.724924  380735 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:46.725609  380735 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38528->127.0.0.1:33143: read: connection reset by peer
	I1219 03:24:49.870786  380735 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
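The failed dial at 03:24:46.7 is expected: the container was restarted only a fraction of a second earlier, so the published SSH port accepts the TCP connection but sshd resets it before the handshake completes. libmachine simply retries, and the hostname probe succeeds about three seconds later. A minimal wait-and-retry sketch of that pattern in plain Go (illustrative only; the real code retries the SSH handshake itself rather than a bare TCP connect):

	// sshwait.go - illustrative sketch: wait for a TCP port (here the published
	// SSH port 127.0.0.1:33143 from the inspect output) to accept connections,
	// retrying through transient failures such as "connection reset by peer".
	package main
	
	import (
		"fmt"
		"log"
		"net"
		"time"
	)
	
	func waitTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			c, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				c.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond) // back off briefly and retry
		}
		return fmt.Errorf("%s not reachable within %s", addr, timeout)
	}
	
	func main() {
		if err := waitTCP("127.0.0.1:33143", 30*time.Second); err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh port is accepting connections")
	}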
	I1219 03:24:49.870812  380735 ubuntu.go:182] provisioning hostname "newest-cni-837172"
	I1219 03:24:49.870871  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:49.891419  380735 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:49.891637  380735 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1219 03:24:49.891650  380735 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-837172 && echo "newest-cni-837172" | sudo tee /etc/hostname
	I1219 03:24:50.049680  380735 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:50.049784  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:50.069449  380735 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:50.069652  380735 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1219 03:24:50.069668  380735 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-837172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-837172/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-837172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:50.218589  380735 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:50.218622  380735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:24:50.218647  380735 ubuntu.go:190] setting up certificates
	I1219 03:24:50.218656  380735 provision.go:84] configureAuth start
	I1219 03:24:50.218714  380735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:50.237682  380735 provision.go:143] copyHostCerts
	I1219 03:24:50.237775  380735 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:24:50.237799  380735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:24:50.237904  380735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:24:50.238051  380735 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:24:50.238064  380735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:24:50.238108  380735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:24:50.238226  380735 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:24:50.238237  380735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:24:50.238274  380735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:24:50.238365  380735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-837172 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-837172]
	I1219 03:24:50.349862  380735 provision.go:177] copyRemoteCerts
	I1219 03:24:50.349937  380735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:50.349991  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:50.369020  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:50.471178  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:50.489741  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:50.507980  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:24:50.527578  380735 provision.go:87] duration metric: took 308.910313ms to configureAuth
	I1219 03:24:50.527603  380735 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:50.527876  380735 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:50.527980  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:50.555132  380735 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:50.555432  380735 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1219 03:24:50.555457  380735 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:24:50.863906  380735 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:24:50.863934  380735 machine.go:97] duration metric: took 4.159132031s to provisionDockerMachine
	I1219 03:24:50.863949  380735 start.go:293] postStartSetup for "newest-cni-837172" (driver="docker")
	I1219 03:24:50.863963  380735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:50.864035  380735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:50.864071  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:50.882427  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:50.984259  380735 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:50.987947  380735 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:50.987977  380735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:50.987988  380735 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:24:50.988034  380735 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:24:50.988109  380735 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:24:50.988213  380735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:50.996062  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:51.013216  380735 start.go:296] duration metric: took 149.250887ms for postStartSetup
	I1219 03:24:51.013297  380735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:51.013347  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:51.031896  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:51.129977  380735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:51.134327  380735 fix.go:56] duration metric: took 4.746960414s for fixHost
	I1219 03:24:51.134354  380735 start.go:83] releasing machines lock for "newest-cni-837172", held for 4.747008362s
	I1219 03:24:51.134413  380735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:51.152237  380735 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:51.152280  380735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:51.152347  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:51.152286  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:51.171423  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:51.171779  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:51.322325  380735 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:51.328977  380735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:24:51.363204  380735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:51.367982  380735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:51.368049  380735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:51.376326  380735 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:24:51.376351  380735 start.go:496] detecting cgroup driver to use...
	I1219 03:24:51.376382  380735 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:51.376426  380735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:24:51.391379  380735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:24:51.403663  380735 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:51.403742  380735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:51.418272  380735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:51.431003  380735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:51.510779  380735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:51.597970  380735 docker.go:234] disabling docker service ...
	I1219 03:24:51.598031  380735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:51.617644  380735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:51.630874  380735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:51.723247  380735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:51.802517  380735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:24:51.815173  380735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:51.830786  380735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:24:51.830850  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.839797  380735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:24:51.839858  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.848536  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.857048  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.865481  380735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:51.873127  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.881668  380735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.889927  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.898295  380735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:51.905303  380735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:51.912468  380735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:51.998351  380735 ssh_runner.go:195] Run: sudo systemctl restart crio
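The run of sed commands above rewrites CRI-O's drop-in configuration before the restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to systemd to match the cgroup driver detected on the host, conmon is moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. As a rough illustration only (the section headers and any surrounding defaults are assumptions, not quoted from this node), the edited /etc/crio/crio.conf.d/02-crio.conf ends up carrying settings along these lines:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]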
	I1219 03:24:52.181886  380735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:24:52.181963  380735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:24:52.186208  380735 start.go:564] Will wait 60s for crictl version
	I1219 03:24:52.186277  380735 ssh_runner.go:195] Run: which crictl
	I1219 03:24:52.189939  380735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:52.215158  380735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:24:52.215276  380735 ssh_runner.go:195] Run: crio --version
	I1219 03:24:52.246525  380735 ssh_runner.go:195] Run: crio --version
	I1219 03:24:52.281981  380735 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:24:52.284023  380735 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:52.302369  380735 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:52.306388  380735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:52.320128  380735 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:52.322178  380735 kubeadm.go:884] updating cluster {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:52.322348  380735 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:52.322414  380735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:52.358006  380735 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:52.358032  380735 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:24:52.358082  380735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:52.389789  380735 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:52.389814  380735 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:52.389824  380735 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:24:52.389944  380735 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-837172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:24:52.390038  380735 ssh_runner.go:195] Run: crio config
	I1219 03:24:52.436179  380735 cni.go:84] Creating CNI manager for ""
	I1219 03:24:52.436200  380735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:52.436213  380735 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:52.436234  380735 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-837172 NodeName:newest-cni-837172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:52.436372  380735 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-837172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
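The generated kubeadm configuration above is where the CLI extra option kubeadm.pod-network-cidr=10.42.0.0/16 surfaces: it becomes podSubnet under ClusterConfiguration.networking and clusterCIDR in the KubeProxyConfiguration, while the embedded KubeletConfiguration pins cgroupDriver: systemd, matching the cgroup_manager written into the CRI-O config earlier in this start. A small hedged sketch that reads the multi-document file and prints those values (gopkg.in/yaml.v3 is an assumed dependency here, not something this job installs):

	// kubeadmcheck.go - hedged sketch: read the generated multi-document kubeadm
	// config (e.g. /var/tmp/minikube/kubeadm.yaml.new from the log above) and
	// print the values that must stay consistent with the CRI-O and CNI setup.
	package main
	
	import (
		"fmt"
		"io"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			switch doc["kind"] {
			case "ClusterConfiguration":
				if net, ok := doc["networking"].(map[string]interface{}); ok {
					fmt.Println("podSubnet:", net["podSubnet"])
				}
			case "KubeletConfiguration":
				fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			case "KubeProxyConfiguration":
				fmt.Println("clusterCIDR:", doc["clusterCIDR"])
			}
		}
	}

Nothing in the log suggests minikube performs such a cross-check itself; the sketch only makes the relationship between the three documents explicit.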
	I1219 03:24:52.436432  380735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:52.445103  380735 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:52.445182  380735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:52.452815  380735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:24:52.465273  380735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:52.477483  380735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1219 03:24:52.489732  380735 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:52.493252  380735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:52.503036  380735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:52.579766  380735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:52.609535  380735 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172 for IP: 192.168.76.2
	I1219 03:24:52.609561  380735 certs.go:195] generating shared ca certs ...
	I1219 03:24:52.609582  380735 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:52.609757  380735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:24:52.609813  380735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:24:52.609828  380735 certs.go:257] generating profile certs ...
	I1219 03:24:52.609931  380735 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key
	I1219 03:24:52.609994  380735 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b
	I1219 03:24:52.610057  380735 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key
	I1219 03:24:52.610193  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:24:52.610238  380735 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:52.610253  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:24:52.610293  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:52.610325  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:52.610365  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:52.610416  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:52.611174  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:52.630297  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:52.649645  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:52.668714  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:24:52.693504  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:52.714057  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:52.732890  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:52.750858  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:24:52.769121  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:52.787484  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:24:52.805878  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:24:52.843366  380735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:52.855775  380735 ssh_runner.go:195] Run: openssl version
	I1219 03:24:52.862232  380735 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:52.870154  380735 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:52.878020  380735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:52.882241  380735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:52.882309  380735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:52.919049  380735 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:24:52.928111  380735 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:24:52.936360  380735 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:24:52.946014  380735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:24:52.950736  380735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:24:52.950797  380735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:24:52.987167  380735 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:52.996543  380735 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:24:53.005225  380735 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:24:53.013457  380735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:24:53.017511  380735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:24:53.017575  380735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:24:53.053591  380735 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:53.061829  380735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:53.065839  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:24:53.104722  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:24:53.147357  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:24:53.196987  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:24:53.248227  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:24:53.296410  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
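	Each `openssl x509 -checkend 86400` run above asks whether the given certificate will still be valid 24 hours from now (exit status 0 if it will). A rough Go equivalent of that check, a sketch rather than minikube's actual code, using only the standard library:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path
	// expires within d (the analogue of openssl's -checkend <seconds>).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log above; adjust for other certs.
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", expiring)
	}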
	I1219 03:24:53.348441  380735 kubeadm.go:401] StartCluster: {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:53.348565  380735 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:53.348641  380735 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:53.380737  380735 cri.go:92] found id: "fda13eb9d3da07d79f3fb0741d2e97dea1c1da35c264d9c835de5f407e51329b"
	I1219 03:24:53.380764  380735 cri.go:92] found id: "8eb6c2ea1b67c5f966862ab6d7cb34785fcf89532732134cb85e859b4ed40cd2"
	I1219 03:24:53.380770  380735 cri.go:92] found id: "008ab506a7502be9960429732f171f4e4404010c1d76b19a75fa46ee6f21e968"
	I1219 03:24:53.380775  380735 cri.go:92] found id: "f83a8d18b586d30bed3f7b09ab702d2f91fad51a1abf5039cd003618810c256f"
	I1219 03:24:53.380780  380735 cri.go:92] found id: ""
	I1219 03:24:53.380835  380735 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:24:53.394328  380735 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:53Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:24:53.394397  380735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:53.403160  380735 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:24:53.403181  380735 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:24:53.403239  380735 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:24:53.411263  380735 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:24:53.411647  380735 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-837172" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:53.411788  380735 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-837172" cluster setting kubeconfig missing "newest-cni-837172" context setting]
	I1219 03:24:53.412075  380735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:53.413388  380735 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:24:53.421111  380735 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1219 03:24:53.421154  380735 kubeadm.go:602] duration metric: took 17.965967ms to restartPrimaryControlPlane
	I1219 03:24:53.421173  380735 kubeadm.go:403] duration metric: took 72.742653ms to StartCluster
	I1219 03:24:53.421203  380735 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:53.421283  380735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:53.421954  380735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:53.422207  380735 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:53.422282  380735 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:53.422378  380735 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-837172"
	I1219 03:24:53.422400  380735 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-837172"
	W1219 03:24:53.422409  380735 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:24:53.422411  380735 addons.go:70] Setting dashboard=true in profile "newest-cni-837172"
	I1219 03:24:53.422416  380735 addons.go:70] Setting default-storageclass=true in profile "newest-cni-837172"
	I1219 03:24:53.422437  380735 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:53.422440  380735 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-837172"
	I1219 03:24:53.422476  380735 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:53.422438  380735 addons.go:239] Setting addon dashboard=true in "newest-cni-837172"
	W1219 03:24:53.422553  380735 addons.go:248] addon dashboard should already be in state true
	I1219 03:24:53.422582  380735 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:53.422787  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:53.422943  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:53.423132  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:53.425001  380735 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:53.426512  380735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:53.447055  380735 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:24:53.447081  380735 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:24:53.447143  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:53.447655  380735 addons.go:239] Setting addon default-storageclass=true in "newest-cni-837172"
	W1219 03:24:53.447677  380735 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:24:53.447726  380735 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:53.448165  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:53.448995  380735 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:53.450164  380735 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:53.450182  380735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:53.450233  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:53.478441  380735 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:53.478467  380735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:53.478528  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:53.485293  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:53.485316  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:53.503612  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:53.577150  380735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:53.593619  380735 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:53.593684  380735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:53.597901  380735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:53.600449  380735 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:24:53.608353  380735 api_server.go:72] duration metric: took 186.113057ms to wait for apiserver process to appear ...
	I1219 03:24:53.608386  380735 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:53.608408  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:53.615591  380735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:54.672548  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:24:54.672580  380735 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:24:54.672596  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:54.694184  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:24:54.694225  380735 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:24:55.108674  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:55.113983  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:24:55.114012  380735 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:24:55.156905  380735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.558972243s)
	I1219 03:24:55.156960  380735 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.556481995s)
	I1219 03:24:55.157040  380735 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:24:55.157067  380735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.541448182s)
	I1219 03:24:55.162193  380735 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:24:55.608534  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:55.613492  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:24:55.613529  380735 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:24:56.068569  380735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:24:56.109036  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:56.114869  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:24:56.115957  380735 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:56.115986  380735 api_server.go:131] duration metric: took 2.5075929s to wait for apiserver health ...
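	The healthz wait above tolerates 403 responses (anonymous requests are rejected until the RBAC bootstrap post-start hook finishes) and 500 responses (some post-start hooks still report failed) before the endpoint finally returns 200 with body "ok". A condensed sketch of such a poll loop; the endpoint URL is taken from the log, and TLS verification is skipped here only because the apiserver certificate is signed by minikube's own CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Quick probe against a self-signed control plane; skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.76.2:8443/healthz"
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // expected: "ok"
					return
				}
				log.Printf("healthz returned %d, retrying", resp.StatusCode)
			} else {
				log.Printf("healthz probe failed: %v", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver never became healthy")
	}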
	I1219 03:24:56.116002  380735 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:56.120146  380735 system_pods.go:59] 8 kube-system pods found
	I1219 03:24:56.120185  380735 system_pods.go:61] "coredns-7d764666f9-ckc9j" [5bc3e758-2623-4eae-87fe-a58b932c9e87] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:56.120197  380735 system_pods.go:61] "etcd-newest-cni-837172" [59f28fae-3605-487b-a1b8-c3851c47abac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:56.120208  380735 system_pods.go:61] "kindnet-846n4" [b45c7fbd-085c-4972-b312-0973aab68ddc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:56.120218  380735 system_pods.go:61] "kube-apiserver-newest-cni-837172" [8d92900e-716d-42ad-9d88-1ca6d0ddf5c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:56.120227  380735 system_pods.go:61] "kube-controller-manager-newest-cni-837172" [46b3ad5a-64d1-4e1f-8bdf-ce613dcd6348] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:24:56.120239  380735 system_pods.go:61] "kube-proxy-6wg2n" [356cd689-df37-49ac-a3f2-1931978ccf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:24:56.120247  380735 system_pods.go:61] "kube-scheduler-newest-cni-837172" [da065d09-cc65-42e7-8e0d-9f9709cafaf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:24:56.120253  380735 system_pods.go:61] "storage-provisioner" [ba402c27-5828-489f-a656-bc0ef2e8f05e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:56.120265  380735 system_pods.go:74] duration metric: took 4.256236ms to wait for pod list to return data ...
	I1219 03:24:56.120278  380735 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:56.123330  380735 default_sa.go:45] found service account: "default"
	I1219 03:24:56.123351  380735 default_sa.go:55] duration metric: took 3.06783ms for default service account to be created ...
	I1219 03:24:56.123362  380735 kubeadm.go:587] duration metric: took 2.701129347s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:56.123383  380735 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:56.126040  380735 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:56.126068  380735 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:56.126085  380735 node_conditions.go:105] duration metric: took 2.695124ms to run NodePressure ...
	I1219 03:24:56.126099  380735 start.go:242] waiting for startup goroutines ...
	I1219 03:24:58.951395  380735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.882776783s)
	I1219 03:24:58.951497  380735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:24:59.119677  380735 addons.go:500] Verifying addon dashboard=true in "newest-cni-837172"
	I1219 03:24:59.119984  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:59.139581  380735 out.go:179] * Verifying dashboard addon...
	I1219 03:24:59.141351  380735 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:24:59.144824  380735 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:24:59.144839  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:59.644196  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:00.144643  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:00.645028  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:01.144630  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:01.645089  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:02.144961  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:02.644679  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:03.145333  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:03.644986  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:04.144758  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:04.645494  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:05.145229  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:05.644918  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:06.145818  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:06.644221  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:07.145109  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:07.645012  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:08.146248  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:08.645246  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:09.145504  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:09.644904  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:10.145240  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:10.645473  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:11.145036  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:11.644615  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:12.145741  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:12.645028  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:13.145328  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:13.644988  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:14.145412  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:14.644866  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:15.145841  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:15.645279  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:16.145389  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:16.645890  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:17.144951  380735 kapi.go:107] duration metric: took 18.00359908s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
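	The kapi.go loop above re-checks roughly every 500ms until a pod matching the label selector `app.kubernetes.io/name=kubernetes-dashboard-web` leaves Pending. A condensed client-go sketch of the same kind of wait; the kubeconfig path and the Running-phase condition are assumptions for illustration, not minikube's exact logic:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; point this at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				log.Fatal(err)
			}
			if len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("dashboard web pod is running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}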
	I1219 03:25:17.147090  380735 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-837172 addons enable metrics-server
	
	I1219 03:25:17.148352  380735 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:25:17.149617  380735 addons.go:546] duration metric: took 23.727342758s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:25:17.149650  380735 start.go:247] waiting for cluster config update ...
	I1219 03:25:17.149660  380735 start.go:256] writing updated cluster config ...
	I1219 03:25:17.149904  380735 ssh_runner.go:195] Run: rm -f paused
	I1219 03:25:17.210395  380735 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:25:17.212158  380735 out.go:179] * Done! kubectl is now configured to use "newest-cni-837172" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 19 03:25:13 newest-cni-837172 crio[519]: time="2025-12-19T03:25:13.774969101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:13 newest-cni-837172 crio[519]: time="2025-12-19T03:25:13.804423367Z" level=info msg="Created container 7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53: kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/clear-stale-pid" id=f70de915-f2ed-4b2c-9d47-ca535726c0d9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:13 newest-cni-837172 crio[519]: time="2025-12-19T03:25:13.805106814Z" level=info msg="Starting container: 7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53" id=cbfede47-048f-4203-bde1-8919e8d1f6c9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:25:13 newest-cni-837172 crio[519]: time="2025-12-19T03:25:13.807492463Z" level=info msg="Started container" PID=1734 containerID=7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53 description=kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/clear-stale-pid id=cbfede47-048f-4203-bde1-8919e8d1f6c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9dc504c9c3f255b0364c23e49ab87a1d5469de1fc1a0367ad8a2b6d41e97494
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.784262728Z" level=info msg="Checking image status: kong:3.9" id=bc9a5507-b8a4-4dad-9040-626db4d2673a name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.7844356Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.786577546Z" level=info msg="Checking image status: kong:3.9" id=db708e62-f4db-4a59-ad9c-118046a48b4d name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.786750705Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.790785279Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/proxy" id=c8834068-3236-4b8c-bb08-e043c74b0baf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.790919262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.08682841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.087585958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.117557795Z" level=info msg="Created container b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc: kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/proxy" id=c8834068-3236-4b8c-bb08-e043c74b0baf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.118243854Z" level=info msg="Starting container: b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc" id=7ae8a508-e08f-489f-a479-5212a9219d98 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.120857731Z" level=info msg="Started container" PID=1865 containerID=b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc description=kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/proxy id=7ae8a508-e08f-489f-a479-5212a9219d98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9dc504c9c3f255b0364c23e49ab87a1d5469de1fc1a0367ad8a2b6d41e97494
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.129003001Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30" id=6cd5c5c8-b0b0-437f-8e10-ca61897ef863 name=/runtime.v1.ImageService/PullImage
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.129726154Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-web:1.7.0" id=2b5bdd5f-d9ed-4eed-b5a3-84759e0a58f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.132296655Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-web:1.7.0" id=614f34bb-fa95-44d4-8fe6-1cf16965a90e name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.137097298Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-h5czz/kubernetes-dashboard-web" id=fafe735f-7790-4f27-83a6-660d565cf225 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.137244058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.142264082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.14309182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.169906843Z" level=info msg="Created container 4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884: kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-h5czz/kubernetes-dashboard-web" id=fafe735f-7790-4f27-83a6-660d565cf225 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.170606259Z" level=info msg="Starting container: 4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884" id=ed3b7cc2-f381-439b-a981-a1684f0b6988 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.173212041Z" level=info msg="Started container" PID=1935 containerID=4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884 description=kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-h5czz/kubernetes-dashboard-web id=ed3b7cc2-f381-439b-a981-a1684f0b6988 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6536c51414ccbabb686887e9377261239224552baf5cf9357bb5ec80a03140a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	4121c422fd7f0       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               4 seconds ago       Running             kubernetes-dashboard-web               0                   d6536c51414cc       kubernetes-dashboard-web-7f7574785f-h5czz               kubernetes-dashboard
	b25fa25074c8c       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           4 seconds ago       Running             proxy                                  0                   c9dc504c9c3f2       kubernetes-dashboard-kong-78b7499b45-25khp              kubernetes-dashboard
	7cba6c407f308       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             7 seconds ago       Exited              clear-stale-pid                        0                   c9dc504c9c3f2       kubernetes-dashboard-kong-78b7499b45-25khp              kubernetes-dashboard
	31d0c579886a6       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              11 seconds ago      Running             kubernetes-dashboard-auth              0                   cf28f2c977003       kubernetes-dashboard-auth-657c9898c4-5dtgw              kubernetes-dashboard
	f35c30abddd50       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               12 seconds ago      Running             kubernetes-dashboard-api               0                   67c159b752019       kubernetes-dashboard-api-845cd649f7-g8x4c               kubernetes-dashboard
	e9b169e38837e       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   13 seconds ago      Running             kubernetes-dashboard-metrics-scraper   0                   0966e73726019       kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v   kubernetes-dashboard
	9ccabcaa81e1b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                           14 seconds ago      Running             coredns                                0                   149e5476c3c37       coredns-7d764666f9-ckc9j                                kube-system
	d0c550beeeb65       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           14 seconds ago      Running             storage-provisioner                    0                   eff06a6d37cc7       storage-provisioner                                     kube-system
	ed72659cbbde1       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           25 seconds ago      Running             kindnet-cni                            1                   9747449bd87ae       kindnet-846n4                                           kube-system
	da4194cf3330d       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                           25 seconds ago      Running             kube-proxy                             1                   95f806b887dbe       kube-proxy-6wg2n                                        kube-system
	fda13eb9d3da0       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                           27 seconds ago      Running             kube-controller-manager                1                   8df8cf3c1ccb9       kube-controller-manager-newest-cni-837172               kube-system
	8eb6c2ea1b67c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                           27 seconds ago      Running             etcd                                   1                   3b262e6d2188d       etcd-newest-cni-837172                                  kube-system
	008ab506a7502       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                           27 seconds ago      Running             kube-scheduler                         1                   ad1687b534153       kube-scheduler-newest-cni-837172                        kube-system
	f83a8d18b586d       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                           27 seconds ago      Running             kube-apiserver                         1                   1c304c2a783ae       kube-apiserver-newest-cni-837172                        kube-system
	
	
	==> coredns [9ccabcaa81e1b6495f7bf45ad6fa9e2ea8e2e4a121f83d0d47cc7b4febce7387] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52615 - 14779 "HINFO IN 4720937862764141502.7647195332102853574. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023038929s
	
	
	==> describe nodes <==
	Name:               newest-cni-837172
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-837172
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=newest-cni-837172
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_24_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:24:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-837172
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:25:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:25:06 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:25:06 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:25:06 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:25:06 +0000   Fri, 19 Dec 2025 03:25:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-837172
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                89c49ec5-bdd2-4caa-8f8e-fdb6f1a61d8d
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-ckc9j                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     57s
	  kube-system                 etcd-newest-cni-837172                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         62s
	  kube-system                 kindnet-846n4                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-newest-cni-837172                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-newest-cni-837172                200m (2%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-6wg2n                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-newest-cni-837172                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-api-845cd649f7-g8x4c                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     23s
	  kubernetes-dashboard        kubernetes-dashboard-auth-657c9898c4-5dtgw               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     23s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-25khp               0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     23s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-h5czz                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  58s   node-controller  Node newest-cni-837172 event: Registered Node newest-cni-837172 in Controller
	  Normal  RegisteredNode  24s   node-controller  Node newest-cni-837172 event: Registered Node newest-cni-837172 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [8eb6c2ea1b67c5f966862ab6d7cb34785fcf89532732134cb85e859b4ed40cd2] <==
	{"level":"info","ts":"2025-12-19T03:24:53.316185Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-19T03:24:53.316185Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-19T03:24:53.315347Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:24:53.316390Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-19T03:24:53.316021Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T03:24:53.316547Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T03:24:53.315313Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-19T03:24:53.804906Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:53.804965Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:53.805040Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:53.805062Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:24:53.805088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:24:53.805898Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-19T03:24:53.805925Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:24:53.805945Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:24:53.805955Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-19T03:24:53.806554Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-837172 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:24:53.806583Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:24:53.806628Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:24:53.806951Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:24:53.806985Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:24:53.807880Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:24:53.807961Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:24:53.810837Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:24:53.811058Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 03:25:21 up  1:07,  0 user,  load average: 1.81, 1.01, 1.25
	Linux newest-cni-837172 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed72659cbbde1ac0d1a3f92bad6e21b22b83b110f25c911c29be464bd98e904e] <==
	I1219 03:24:56.283403       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:24:56.283695       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1219 03:24:56.380258       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:24:56.380302       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:24:56.380328       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:24:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:24:56.583349       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:24:56.583446       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:24:56.583690       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:24:56.584082       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:24:56.980388       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:24:56.980436       1 metrics.go:72] Registering metrics
	I1219 03:24:56.980506       1 controller.go:711] "Syncing nftables rules"
	I1219 03:25:06.584170       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:25:06.584239       1 main.go:301] handling current node
	I1219 03:25:16.583851       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:25:16.583949       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f83a8d18b586d30bed3f7b09ab702d2f91fad51a1abf5039cd003618810c256f] <==
	I1219 03:24:56.509947       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	W1219 03:24:57.830896       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:24:57.844534       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.854195       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.869749       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.880084       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.891480       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.900454       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.919206       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.933614       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.942690       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.953011       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.967794       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:24:58.840989       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:24:58.892973       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:24:58.897275       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:24:58.908863       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.111.204.138"}
	I1219 03:24:58.909147       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.219.4"}
	I1219 03:24:58.911243       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:24:58.912899       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.97.207.119"}
	I1219 03:24:58.916386       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.102.10.174"}
	I1219 03:24:58.918947       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:24:58.919674       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.96.83.11"}
	I1219 03:24:58.924038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:24:58.935527       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fda13eb9d3da07d79f3fb0741d2e97dea1c1da35c264d9c835de5f407e51329b] <==
	I1219 03:24:57.915028       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915185       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915199       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915384       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915203       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915213       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.916385       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.916727       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.916818       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.917235       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.917303       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.917618       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.917673       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.918552       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.920346       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.921998       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:24:57.928160       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-837172"
	I1219 03:24:57.929042       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1219 03:24:59.014245       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:59.017306       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:59.017416       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:24:59.017436       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:24:59.023117       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:59.026676       1 shared_informer.go:377] "Caches are synced"
	I1219 03:25:07.931909       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [da4194cf3330daac8fb3341a58e3cc51bc2a445831557d5264d994bb830c2072] <==
	I1219 03:24:56.134456       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:24:56.202473       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:24:56.303418       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:56.303473       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1219 03:24:56.303581       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:24:56.321151       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:24:56.321218       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:24:56.326635       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:24:56.327076       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:24:56.327120       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:24:56.328506       1 config.go:200] "Starting service config controller"
	I1219 03:24:56.328729       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:24:56.328597       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:24:56.328806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:24:56.328806       1 config.go:309] "Starting node config controller"
	I1219 03:24:56.328620       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:24:56.328830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:24:56.328819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:24:56.429798       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:24:56.429839       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:24:56.429867       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:24:56.433784       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [008ab506a7502be9960429732f171f4e4404010c1d76b19a75fa46ee6f21e968] <==
	I1219 03:24:53.605450       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:24:54.696966       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:24:54.700115       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:24:54.700155       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:24:54.700165       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:24:54.719764       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:24:54.719861       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:24:54.722680       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:24:54.722733       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:24:54.722911       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:24:54.723111       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:24:54.823789       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: E1219 03:25:08.028471     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-837172" containerName="kube-controller-manager"
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: I1219 03:25:08.757680     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: I1219 03:25:08.757783     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: E1219 03:25:08.762085     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-837172" containerName="kube-scheduler"
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: E1219 03:25:08.762195     674 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: E1219 03:25:08.762418     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ckc9j" containerName="coredns"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.659161     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.659253     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: E1219 03:25:09.768049     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ckc9j" containerName="coredns"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: E1219 03:25:09.768245     674 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.778751     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v" podStartSLOduration=10.947466533 podStartE2EDuration="11.778731444s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.054076537 +0000 UTC m=+14.442233161" lastFinishedPulling="2025-12-19 03:25:07.885341451 +0000 UTC m=+15.273498072" observedRunningTime="2025-12-19 03:25:08.773564172 +0000 UTC m=+16.161720798" watchObservedRunningTime="2025-12-19 03:25:09.778731444 +0000 UTC m=+17.166888073"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.790896     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-845cd649f7-g8x4c" podStartSLOduration=10.089408675 podStartE2EDuration="11.790876249s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.055625057 +0000 UTC m=+14.443781678" lastFinishedPulling="2025-12-19 03:25:08.757092644 +0000 UTC m=+16.145249252" observedRunningTime="2025-12-19 03:25:09.778927187 +0000 UTC m=+17.167083813" watchObservedRunningTime="2025-12-19 03:25:09.790876249 +0000 UTC m=+17.179032877"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.791030     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-657c9898c4-5dtgw" podStartSLOduration=9.188099307 podStartE2EDuration="11.791021669s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.055656771 +0000 UTC m=+14.443813387" lastFinishedPulling="2025-12-19 03:25:09.658579125 +0000 UTC m=+17.046735749" observedRunningTime="2025-12-19 03:25:09.79036217 +0000 UTC m=+17.178518796" watchObservedRunningTime="2025-12-19 03:25:09.791021669 +0000 UTC m=+17.179178296"
	Dec 19 03:25:14 newest-cni-837172 kubelet[674]: E1219 03:25:14.783826     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp" containerName="proxy"
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: I1219 03:25:16.131737     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: I1219 03:25:16.131824     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: E1219 03:25:16.791409     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp" containerName="proxy"
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: I1219 03:25:16.816435     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-h5czz" podStartSLOduration=9.743654321 podStartE2EDuration="18.816413772s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.05843105 +0000 UTC m=+14.446587657" lastFinishedPulling="2025-12-19 03:25:16.131190497 +0000 UTC m=+23.519347108" observedRunningTime="2025-12-19 03:25:16.81595879 +0000 UTC m=+24.204115417" watchObservedRunningTime="2025-12-19 03:25:16.816413772 +0000 UTC m=+24.204570402"
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: I1219 03:25:16.816574     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp" podStartSLOduration=11.088893528 podStartE2EDuration="18.816564642s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.058275106 +0000 UTC m=+14.446431712" lastFinishedPulling="2025-12-19 03:25:14.785946217 +0000 UTC m=+22.174102826" observedRunningTime="2025-12-19 03:25:16.806276321 +0000 UTC m=+24.194433014" watchObservedRunningTime="2025-12-19 03:25:16.816564642 +0000 UTC m=+24.204721270"
	Dec 19 03:25:17 newest-cni-837172 kubelet[674]: E1219 03:25:17.795616     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp" containerName="proxy"
	Dec 19 03:25:18 newest-cni-837172 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:25:18 newest-cni-837172 kubelet[674]: I1219 03:25:18.219378     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 19 03:25:18 newest-cni-837172 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:25:18 newest-cni-837172 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:25:18 newest-cni-837172 systemd[1]: kubelet.service: Consumed 1.437s CPU time.
	
	
	==> kubernetes-dashboard [31d0c579886a60b8b9f5ac47b7e2223c4edc5ba9511956f59b46b4e6e8a493b5] <==
	I1219 03:25:09.785271       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:25:09.785340       1 init.go:49] Using in-cluster config
	I1219 03:25:09.785458       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884] <==
	I1219 03:25:16.249912       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:25:16.249964       1 init.go:48] Using in-cluster config
	I1219 03:25:16.250124       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [e9b169e38837e0e14dc98c97fe6c76f4950aa4f2569e237239b942cffc704403] <==
	10.42.0.1 - - [19/Dec/2025:03:25:08 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	I1219 03:25:07.961620       1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
	W1219 03:25:07.961690       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1219 03:25:07.961838       1 main.go:51] Kubernetes host: https://10.96.0.1:443
	I1219 03:25:07.961850       1 main.go:52] Namespace(s): []
	
	
	==> kubernetes-dashboard [f35c30abddd500953c627635a6009dfd10e461179831ce083750a776a74432d0] <==
	I1219 03:25:08.887252       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:25:08.887322       1 init.go:49] Using in-cluster config
	I1219 03:25:08.887553       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:25:08.887567       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:25:08.887573       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:25:08.887579       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:25:08.894002       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 03:25:08.894029       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:25:08.899584       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 03:25:08.904271       1 manager.go:101] Successful request to sidecar
	
	
	==> storage-provisioner [d0c550beeeb6567d0a1c34c8c5920c0f2dd33b9ad81073dd94e7b76aaff5887a] <==
	I1219 03:25:07.089859       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:25:07.098334       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:25:07.098379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 03:25:07.100956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:07.105386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:25:07.106406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:25:07.106494       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66bc8803-98fb-4cc5-b38a-9ba3185661dc", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-837172_d54e61ba-f8df-4afd-a954-60be7880f59b became leader
	I1219 03:25:07.106584       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-837172_d54e61ba-f8df-4afd-a954-60be7880f59b!
	W1219 03:25:07.109354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:07.112663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:25:07.207772       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-837172_d54e61ba-f8df-4afd-a954-60be7880f59b!
	W1219 03:25:09.116645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:09.120909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:11.124826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:11.134636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:13.138431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:13.143008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:15.146121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:15.151340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:17.157314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:17.161181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:19.164435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:19.168673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:21.171645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:21.175779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-837172 -n newest-cni-837172
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-837172 -n newest-cni-837172: exit status 2 (325.950076ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-837172 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-837172
helpers_test.go:244: (dbg) docker inspect newest-cni-837172:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83",
	        "Created": "2025-12-19T03:24:05.774434179Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 381052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-19T03:24:46.434199163Z",
	            "FinishedAt": "2025-12-19T03:24:45.196372534Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/hostname",
	        "HostsPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/hosts",
	        "LogPath": "/var/lib/docker/containers/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83/351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83-json.log",
	        "Name": "/newest-cni-837172",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-837172:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-837172",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "351fe078c7b374b02eac438980bbfd60e0b4b9abb1fae5a6897f5bc143480d83",
	                "LowerDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f-init/diff:/var/lib/docker/overlay2/73b5c42cc05e61e30155987914ace628c2a1ff62f85df8ee626f47925bf99b7d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41fe36c0fb6be8853bfc37ac0758cc1fc327dec5ab6c2b0699f7042493b2612f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-837172",
	                "Source": "/var/lib/docker/volumes/newest-cni-837172/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-837172",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-837172",
	                "name.minikube.sigs.k8s.io": "newest-cni-837172",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "66035efdff628f49fa69a1f0aeec65519fe4987d90b38ba7f18b9aef25aebc5c",
	            "SandboxKey": "/var/run/docker/netns/66035efdff62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-837172": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "031824ca2cfc364eb4fab915cefaa7a9d15393eeb43e3a28ecfa7e5605c16dd1",
	                    "EndpointID": "405696e6393191148b9f7f05ba6247a0756e4c86e23bcdfc02ec7737ea0dce9f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d6:31:d4:6e:c7:36",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-837172",
	                        "351fe078c7b3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837172 -n newest-cni-837172
E1219 03:25:22.381363    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/calico-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837172 -n newest-cni-837172: exit status 2 (324.652313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-837172 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-837172 logs -n 25: (1.142215993s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:05 UTC │
	│ start   │ -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:05 UTC │ 19 Dec 25 03:06 UTC │
	│ image   │ old-k8s-version-433330 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p old-k8s-version-433330 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ image   │ no-preload-278042 image list --format=json                                                                                                                                                                                                         │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:23 UTC │
	│ pause   │ -p no-preload-278042 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │                     │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:23 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p old-k8s-version-433330                                                                                                                                                                                                                          │ old-k8s-version-433330       │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p no-preload-278042                                                                                                                                                                                                                               │ no-preload-278042            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-837172 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ stop    │ -p newest-cni-837172 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ embed-certs-805185 image list --format=json                                                                                                                                                                                                        │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p embed-certs-805185 --alsologtostderr -v=1                                                                                                                                                                                                       │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ delete  │ -p embed-certs-805185                                                                                                                                                                                                                              │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ default-k8s-diff-port-717222 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ pause   │ -p default-k8s-diff-port-717222 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │                     │
	│ delete  │ -p embed-certs-805185                                                                                                                                                                                                                              │ embed-certs-805185           │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-837172 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ start   │ -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:25 UTC │
	│ delete  │ -p default-k8s-diff-port-717222                                                                                                                                                                                                                    │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ delete  │ -p default-k8s-diff-port-717222                                                                                                                                                                                                                    │ default-k8s-diff-port-717222 │ jenkins │ v1.37.0 │ 19 Dec 25 03:24 UTC │ 19 Dec 25 03:24 UTC │
	│ image   │ newest-cni-837172 image list --format=json                                                                                                                                                                                                         │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:25 UTC │ 19 Dec 25 03:25 UTC │
	│ pause   │ -p newest-cni-837172 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-837172            │ jenkins │ v1.37.0 │ 19 Dec 25 03:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:24:46
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:24:46.186425  380735 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:24:46.186684  380735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:46.186694  380735 out.go:374] Setting ErrFile to fd 2...
	I1219 03:24:46.186711  380735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:24:46.186932  380735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 03:24:46.187393  380735 out.go:368] Setting JSON to false
	I1219 03:24:46.188519  380735 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4037,"bootTime":1766110649,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:24:46.188572  380735 start.go:143] virtualization: kvm guest
	I1219 03:24:46.190437  380735 out.go:179] * [newest-cni-837172] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:24:46.191829  380735 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:24:46.191879  380735 notify.go:221] Checking for updates...
	I1219 03:24:46.194410  380735 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:24:46.195933  380735 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:46.197315  380735 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 03:24:46.198516  380735 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:24:46.199874  380735 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:24:46.201738  380735 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:46.202513  380735 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:24:46.231628  380735 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 03:24:46.231787  380735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:46.296408  380735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:24:46.285802205 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:46.296560  380735 docker.go:319] overlay module found
	I1219 03:24:46.300911  380735 out.go:179] * Using the docker driver based on existing profile
	I1219 03:24:46.302079  380735 start.go:309] selected driver: docker
	I1219 03:24:46.302097  380735 start.go:928] validating driver "docker" against &{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:46.302197  380735 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:24:46.302844  380735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 03:24:46.360231  380735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-12-19 03:24:46.349155163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 03:24:46.360633  380735 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:46.360678  380735 cni.go:84] Creating CNI manager for ""
	I1219 03:24:46.360796  380735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:46.360862  380735 start.go:353] cluster config:
	{Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:46.363384  380735 out.go:179] * Starting "newest-cni-837172" primary control-plane node in "newest-cni-837172" cluster
	I1219 03:24:46.364575  380735 cache.go:134] Beginning downloading kic base image for docker with crio
	I1219 03:24:46.365748  380735 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1219 03:24:46.366784  380735 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:46.366837  380735 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1219 03:24:46.366858  380735 cache.go:65] Caching tarball of preloaded images
	I1219 03:24:46.366856  380735 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1219 03:24:46.366954  380735 preload.go:238] Found /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1219 03:24:46.366968  380735 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1219 03:24:46.367086  380735 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:46.387214  380735 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1219 03:24:46.387233  380735 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1219 03:24:46.387251  380735 cache.go:243] Successfully downloaded all kic artifacts
	I1219 03:24:46.387281  380735 start.go:360] acquireMachinesLock for newest-cni-837172: {Name:mk7db0c3e7d0fad2e8a414af15e136d817446719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:24:46.387335  380735 start.go:364] duration metric: took 36.004µs to acquireMachinesLock for "newest-cni-837172"
	I1219 03:24:46.387352  380735 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:24:46.387359  380735 fix.go:54] fixHost starting: 
	I1219 03:24:46.387582  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:46.405829  380735 fix.go:112] recreateIfNeeded on newest-cni-837172: state=Stopped err=<nil>
	W1219 03:24:46.405867  380735 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:24:46.407595  380735 out.go:252] * Restarting existing docker container for "newest-cni-837172" ...
	I1219 03:24:46.407668  380735 cli_runner.go:164] Run: docker start newest-cni-837172
	I1219 03:24:46.661365  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:46.683720  380735 kic.go:430] container "newest-cni-837172" state is running.
	I1219 03:24:46.684145  380735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:46.704487  380735 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/config.json ...
	I1219 03:24:46.704785  380735 machine.go:94] provisionDockerMachine start ...
	I1219 03:24:46.704878  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:46.724642  380735 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:46.724907  380735 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1219 03:24:46.724924  380735 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:24:46.725609  380735 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38528->127.0.0.1:33143: read: connection reset by peer
	I1219 03:24:49.870786  380735 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:49.870812  380735 ubuntu.go:182] provisioning hostname "newest-cni-837172"
	I1219 03:24:49.870871  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:49.891419  380735 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:49.891637  380735 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1219 03:24:49.891650  380735 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-837172 && echo "newest-cni-837172" | sudo tee /etc/hostname
	I1219 03:24:50.049680  380735 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-837172
	
	I1219 03:24:50.049784  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:50.069449  380735 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:50.069652  380735 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1219 03:24:50.069668  380735 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-837172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-837172/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-837172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:24:50.218589  380735 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:24:50.218622  380735 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22230-4987/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-4987/.minikube}
	I1219 03:24:50.218647  380735 ubuntu.go:190] setting up certificates
	I1219 03:24:50.218656  380735 provision.go:84] configureAuth start
	I1219 03:24:50.218714  380735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:50.237682  380735 provision.go:143] copyHostCerts
	I1219 03:24:50.237775  380735 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem, removing ...
	I1219 03:24:50.237799  380735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem
	I1219 03:24:50.237904  380735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/key.pem (1675 bytes)
	I1219 03:24:50.238051  380735 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem, removing ...
	I1219 03:24:50.238064  380735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem
	I1219 03:24:50.238108  380735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/ca.pem (1078 bytes)
	I1219 03:24:50.238226  380735 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem, removing ...
	I1219 03:24:50.238237  380735 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem
	I1219 03:24:50.238274  380735 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-4987/.minikube/cert.pem (1123 bytes)
	I1219 03:24:50.238365  380735 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem org=jenkins.newest-cni-837172 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-837172]
	I1219 03:24:50.349862  380735 provision.go:177] copyRemoteCerts
	I1219 03:24:50.349937  380735 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:24:50.349991  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:50.369020  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:50.471178  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1219 03:24:50.489741  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:24:50.507980  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:24:50.527578  380735 provision.go:87] duration metric: took 308.910313ms to configureAuth
	I1219 03:24:50.527603  380735 ubuntu.go:206] setting minikube options for container-runtime
	I1219 03:24:50.527876  380735 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:50.527980  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:50.555132  380735 main.go:144] libmachine: Using SSH client type: native
	I1219 03:24:50.555432  380735 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1219 03:24:50.555457  380735 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1219 03:24:50.863906  380735 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1219 03:24:50.863934  380735 machine.go:97] duration metric: took 4.159132031s to provisionDockerMachine
	I1219 03:24:50.863949  380735 start.go:293] postStartSetup for "newest-cni-837172" (driver="docker")
	I1219 03:24:50.863963  380735 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:24:50.864035  380735 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:24:50.864071  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:50.882427  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:50.984259  380735 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:24:50.987947  380735 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1219 03:24:50.987977  380735 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1219 03:24:50.987988  380735 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/addons for local assets ...
	I1219 03:24:50.988034  380735 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-4987/.minikube/files for local assets ...
	I1219 03:24:50.988109  380735 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem -> 85362.pem in /etc/ssl/certs
	I1219 03:24:50.988213  380735 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:24:50.996062  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:51.013216  380735 start.go:296] duration metric: took 149.250887ms for postStartSetup
	I1219 03:24:51.013297  380735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:24:51.013347  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:51.031896  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:51.129977  380735 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1219 03:24:51.134327  380735 fix.go:56] duration metric: took 4.746960414s for fixHost
	I1219 03:24:51.134354  380735 start.go:83] releasing machines lock for "newest-cni-837172", held for 4.747008362s
	I1219 03:24:51.134413  380735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-837172
	I1219 03:24:51.152237  380735 ssh_runner.go:195] Run: cat /version.json
	I1219 03:24:51.152280  380735 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:24:51.152347  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:51.152286  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:51.171423  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:51.171779  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:51.322325  380735 ssh_runner.go:195] Run: systemctl --version
	I1219 03:24:51.328977  380735 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1219 03:24:51.363204  380735 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:24:51.367982  380735 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:24:51.368049  380735 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:24:51.376326  380735 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1219 03:24:51.376351  380735 start.go:496] detecting cgroup driver to use...
	I1219 03:24:51.376382  380735 detect.go:190] detected "systemd" cgroup driver on host os
	I1219 03:24:51.376426  380735 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 03:24:51.391379  380735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 03:24:51.403663  380735 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:24:51.403742  380735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:24:51.418272  380735 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:24:51.431003  380735 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:24:51.510779  380735 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:24:51.597970  380735 docker.go:234] disabling docker service ...
	I1219 03:24:51.598031  380735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:24:51.617644  380735 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:24:51.630874  380735 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:24:51.723247  380735 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:24:51.802517  380735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:24:51.815173  380735 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:24:51.830786  380735 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1219 03:24:51.830850  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.839797  380735 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1219 03:24:51.839858  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.848536  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.857048  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.865481  380735 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:24:51.873127  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.881668  380735 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.889927  380735 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1219 03:24:51.898295  380735 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:24:51.905303  380735 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:24:51.912468  380735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:51.998351  380735 ssh_runner.go:195] Run: sudo systemctl restart crio
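The sed and tee commands in this step configure crictl and the CRI-O drop-in before the runtime is restarted. A minimal sketch of the intended end state, reconstructed from the commands above rather than captured from the host (section placement assumed from CRI-O defaults; other fields in 02-crio.conf are left untouched):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]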
	I1219 03:24:52.181886  380735 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1219 03:24:52.181963  380735 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1219 03:24:52.186208  380735 start.go:564] Will wait 60s for crictl version
	I1219 03:24:52.186277  380735 ssh_runner.go:195] Run: which crictl
	I1219 03:24:52.189939  380735 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1219 03:24:52.215158  380735 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1219 03:24:52.215276  380735 ssh_runner.go:195] Run: crio --version
	I1219 03:24:52.246525  380735 ssh_runner.go:195] Run: crio --version
	I1219 03:24:52.281981  380735 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1219 03:24:52.284023  380735 cli_runner.go:164] Run: docker network inspect newest-cni-837172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1219 03:24:52.302369  380735 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1219 03:24:52.306388  380735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:52.320128  380735 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:24:52.322178  380735 kubeadm.go:884] updating cluster {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:24:52.322348  380735 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1219 03:24:52.322414  380735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:52.358006  380735 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:52.358032  380735 crio.go:433] Images already preloaded, skipping extraction
	I1219 03:24:52.358082  380735 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:24:52.389789  380735 crio.go:514] all images are preloaded for cri-o runtime.
	I1219 03:24:52.389814  380735 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:24:52.389824  380735 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1219 03:24:52.389944  380735 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-837172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
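The [Unit]/[Service] fragment above is the kubelet drop-in minikube generates; it is copied to the node a few lines below (the scp calls targeting /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service). A hedged sketch of how the installed result could be inspected by hand on the node, not part of the minikube flow itself:

	sudo systemctl cat kubelet                                        # unit plus the 10-kubeadm.conf drop-in
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the ExecStart override shown above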
	I1219 03:24:52.390038  380735 ssh_runner.go:195] Run: crio config
	I1219 03:24:52.436179  380735 cni.go:84] Creating CNI manager for ""
	I1219 03:24:52.436200  380735 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1219 03:24:52.436213  380735 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:24:52.436234  380735 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-837172 NodeName:newest-cni-837172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:24:52.436372  380735 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-837172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:24:52.436432  380735 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:24:52.445103  380735 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:24:52.445182  380735 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:24:52.452815  380735 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1219 03:24:52.465273  380735 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:24:52.477483  380735 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
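The 2216-byte file scp'd to /var/tmp/minikube/kubeadm.yaml.new is the three-document config printed above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch of how it could be sanity-checked on the node before kubeadm consumes it, assuming a kubeadm recent enough to ship the "config validate" subcommand:

	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	# or a full dry-run of init with the same config
	sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run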
	I1219 03:24:52.489732  380735 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1219 03:24:52.493252  380735 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:24:52.503036  380735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:52.579766  380735 ssh_runner.go:195] Run: sudo systemctl start kubelet
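After the control-plane.minikube.internal entry is written and the kubelet is started, the state can be confirmed with two read-only commands (illustrative only; expected values inferred from this log, e.g. 192.168.76.2):

	getent hosts control-plane.minikube.internal   # expect 192.168.76.2
	systemctl is-active kubelet                    # expect "active"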
	I1219 03:24:52.609535  380735 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172 for IP: 192.168.76.2
	I1219 03:24:52.609561  380735 certs.go:195] generating shared ca certs ...
	I1219 03:24:52.609582  380735 certs.go:227] acquiring lock for ca certs: {Name:mk6396a9308fa3fe57deadca6131f62071b725e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:52.609757  380735 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key
	I1219 03:24:52.609813  380735 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key
	I1219 03:24:52.609828  380735 certs.go:257] generating profile certs ...
	I1219 03:24:52.609931  380735 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/client.key
	I1219 03:24:52.609994  380735 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key.48a05c1b
	I1219 03:24:52.610057  380735 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key
	I1219 03:24:52.610193  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem (1338 bytes)
	W1219 03:24:52.610238  380735 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536_empty.pem, impossibly tiny 0 bytes
	I1219 03:24:52.610253  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:24:52.610293  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/ca.pem (1078 bytes)
	I1219 03:24:52.610325  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:24:52.610365  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/certs/key.pem (1675 bytes)
	I1219 03:24:52.610416  380735 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem (1708 bytes)
	I1219 03:24:52.611174  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:24:52.630297  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:24:52.649645  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:24:52.668714  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1219 03:24:52.693504  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:24:52.714057  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:24:52.732890  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:24:52.750858  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/newest-cni-837172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:24:52.769121  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:24:52.787484  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/certs/8536.pem --> /usr/share/ca-certificates/8536.pem (1338 bytes)
	I1219 03:24:52.805878  380735 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/ssl/certs/85362.pem --> /usr/share/ca-certificates/85362.pem (1708 bytes)
	I1219 03:24:52.843366  380735 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:24:52.855775  380735 ssh_runner.go:195] Run: openssl version
	I1219 03:24:52.862232  380735 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:52.870154  380735 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:24:52.878020  380735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:52.882241  380735 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:25 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:52.882309  380735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:24:52.919049  380735 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:24:52.928111  380735 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8536.pem
	I1219 03:24:52.936360  380735 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8536.pem /etc/ssl/certs/8536.pem
	I1219 03:24:52.946014  380735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8536.pem
	I1219 03:24:52.950736  380735 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:33 /usr/share/ca-certificates/8536.pem
	I1219 03:24:52.950797  380735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8536.pem
	I1219 03:24:52.987167  380735 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:24:52.996543  380735 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/85362.pem
	I1219 03:24:53.005225  380735 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/85362.pem /etc/ssl/certs/85362.pem
	I1219 03:24:53.013457  380735 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85362.pem
	I1219 03:24:53.017511  380735 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:33 /usr/share/ca-certificates/85362.pem
	I1219 03:24:53.017575  380735 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85362.pem
	I1219 03:24:53.053591  380735 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:24:53.061829  380735 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:24:53.065839  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:24:53.104722  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:24:53.147357  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:24:53.196987  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:24:53.248227  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:24:53.296410  380735 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
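Each openssl invocation above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; -checkend exits 0 when the certificate will not have expired by then and non-zero otherwise. A standalone sketch of the same check (path reused from the log; any PEM certificate works):

	CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	if openssl x509 -noout -in "$CERT" -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h (or is already expired)"
	fi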
	I1219 03:24:53.348441  380735 kubeadm.go:401] StartCluster: {Name:newest-cni-837172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-837172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:24:53.348565  380735 cri.go:57] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1219 03:24:53.348641  380735 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:24:53.380737  380735 cri.go:92] found id: "fda13eb9d3da07d79f3fb0741d2e97dea1c1da35c264d9c835de5f407e51329b"
	I1219 03:24:53.380764  380735 cri.go:92] found id: "8eb6c2ea1b67c5f966862ab6d7cb34785fcf89532732134cb85e859b4ed40cd2"
	I1219 03:24:53.380770  380735 cri.go:92] found id: "008ab506a7502be9960429732f171f4e4404010c1d76b19a75fa46ee6f21e968"
	I1219 03:24:53.380775  380735 cri.go:92] found id: "f83a8d18b586d30bed3f7b09ab702d2f91fad51a1abf5039cd003618810c256f"
	I1219 03:24:53.380780  380735 cri.go:92] found id: ""
	I1219 03:24:53.380835  380735 ssh_runner.go:195] Run: sudo runc list -f json
	W1219 03:24:53.394328  380735 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:24:53Z" level=error msg="open /run/runc: no such file or directory"
	I1219 03:24:53.394397  380735 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:24:53.403160  380735 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:24:53.403181  380735 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:24:53.403239  380735 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:24:53.411263  380735 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:24:53.411647  380735 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-837172" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:53.411788  380735 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-4987/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-837172" cluster setting kubeconfig missing "newest-cni-837172" context setting]
	I1219 03:24:53.412075  380735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:53.413388  380735 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:24:53.421111  380735 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1219 03:24:53.421154  380735 kubeadm.go:602] duration metric: took 17.965967ms to restartPrimaryControlPlane
	I1219 03:24:53.421173  380735 kubeadm.go:403] duration metric: took 72.742653ms to StartCluster
	I1219 03:24:53.421203  380735 settings.go:142] acquiring lock: {Name:mk65ec66d58b88cdccc174be1165cfbb6dcd8ebf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:53.421283  380735 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 03:24:53.421954  380735 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-4987/kubeconfig: {Name:mka882d608fabb562decf1b246525ec232d0fa1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:24:53.422207  380735 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1219 03:24:53.422282  380735 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:24:53.422378  380735 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-837172"
	I1219 03:24:53.422400  380735 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-837172"
	W1219 03:24:53.422409  380735 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:24:53.422411  380735 addons.go:70] Setting dashboard=true in profile "newest-cni-837172"
	I1219 03:24:53.422416  380735 addons.go:70] Setting default-storageclass=true in profile "newest-cni-837172"
	I1219 03:24:53.422437  380735 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:53.422440  380735 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-837172"
	I1219 03:24:53.422476  380735 config.go:182] Loaded profile config "newest-cni-837172": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 03:24:53.422438  380735 addons.go:239] Setting addon dashboard=true in "newest-cni-837172"
	W1219 03:24:53.422553  380735 addons.go:248] addon dashboard should already be in state true
	I1219 03:24:53.422582  380735 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:53.422787  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:53.422943  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:53.423132  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:53.425001  380735 out.go:179] * Verifying Kubernetes components...
	I1219 03:24:53.426512  380735 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:24:53.447055  380735 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:24:53.447081  380735 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:24:53.447143  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:53.447655  380735 addons.go:239] Setting addon default-storageclass=true in "newest-cni-837172"
	W1219 03:24:53.447677  380735 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:24:53.447726  380735 host.go:66] Checking if "newest-cni-837172" exists ...
	I1219 03:24:53.448165  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:53.448995  380735 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:24:53.450164  380735 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:53.450182  380735 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:24:53.450233  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:53.478441  380735 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:53.478467  380735 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:24:53.478528  380735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-837172
	I1219 03:24:53.485293  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:53.485316  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:53.503612  380735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/newest-cni-837172/id_rsa Username:docker}
	I1219 03:24:53.577150  380735 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:24:53.593619  380735 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:24:53.593684  380735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:24:53.597901  380735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:24:53.600449  380735 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:24:53.608353  380735 api_server.go:72] duration metric: took 186.113057ms to wait for apiserver process to appear ...
	I1219 03:24:53.608386  380735 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:24:53.608408  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:53.615591  380735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:24:54.672548  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:24:54.672580  380735 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:24:54.672596  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:54.694184  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:24:54.694225  380735 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:24:55.108674  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:55.113983  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:24:55.114012  380735 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:24:55.156905  380735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.558972243s)
	I1219 03:24:55.156960  380735 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.556481995s)
	I1219 03:24:55.157040  380735 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:24:55.157067  380735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.541448182s)
	I1219 03:24:55.162193  380735 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:24:55.608534  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:55.613492  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:24:55.613529  380735 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:24:56.068569  380735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:24:56.109036  380735 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1219 03:24:56.114869  380735 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1219 03:24:56.115957  380735 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:24:56.115986  380735 api_server.go:131] duration metric: took 2.5075929s to wait for apiserver health ...
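The 403 and 500 responses above are expected while the control plane restarts: the 403s come back while the RBAC bootstrap roles (which normally allow unauthenticated reads of /healthz) are still being created, the 500s list the two post-start hooks still failing, and the wait ends with the 200 at 03:24:56. Below is a minimal Go sketch of that kind of anonymous healthz poll, assuming the endpoint from the log and skipping TLS verification because the cluster CA is self-signed; minikube's api_server.go does more than this.

	// healthz_poll.go - keep probing /healthz until the apiserver reports "ok"
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			// Anonymous probe against a self-signed endpoint, so skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body) // "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}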
	I1219 03:24:56.116002  380735 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:24:56.120146  380735 system_pods.go:59] 8 kube-system pods found
	I1219 03:24:56.120185  380735 system_pods.go:61] "coredns-7d764666f9-ckc9j" [5bc3e758-2623-4eae-87fe-a58b932c9e87] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:56.120197  380735 system_pods.go:61] "etcd-newest-cni-837172" [59f28fae-3605-487b-a1b8-c3851c47abac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:24:56.120208  380735 system_pods.go:61] "kindnet-846n4" [b45c7fbd-085c-4972-b312-0973aab68ddc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1219 03:24:56.120218  380735 system_pods.go:61] "kube-apiserver-newest-cni-837172" [8d92900e-716d-42ad-9d88-1ca6d0ddf5c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:24:56.120227  380735 system_pods.go:61] "kube-controller-manager-newest-cni-837172" [46b3ad5a-64d1-4e1f-8bdf-ce613dcd6348] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:24:56.120239  380735 system_pods.go:61] "kube-proxy-6wg2n" [356cd689-df37-49ac-a3f2-1931978ccf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1219 03:24:56.120247  380735 system_pods.go:61] "kube-scheduler-newest-cni-837172" [da065d09-cc65-42e7-8e0d-9f9709cafaf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:24:56.120253  380735 system_pods.go:61] "storage-provisioner" [ba402c27-5828-489f-a656-bc0ef2e8f05e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1219 03:24:56.120265  380735 system_pods.go:74] duration metric: took 4.256236ms to wait for pod list to return data ...
	I1219 03:24:56.120278  380735 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:24:56.123330  380735 default_sa.go:45] found service account: "default"
	I1219 03:24:56.123351  380735 default_sa.go:55] duration metric: took 3.06783ms for default service account to be created ...
	I1219 03:24:56.123362  380735 kubeadm.go:587] duration metric: took 2.701129347s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:24:56.123383  380735 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:24:56.126040  380735 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1219 03:24:56.126068  380735 node_conditions.go:123] node cpu capacity is 8
	I1219 03:24:56.126085  380735 node_conditions.go:105] duration metric: took 2.695124ms to run NodePressure ...
	I1219 03:24:56.126099  380735 start.go:242] waiting for startup goroutines ...
	I1219 03:24:58.951395  380735 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (2.882776783s)
	I1219 03:24:58.951497  380735 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:24:59.119677  380735 addons.go:500] Verifying addon dashboard=true in "newest-cni-837172"
	I1219 03:24:59.119984  380735 cli_runner.go:164] Run: docker container inspect newest-cni-837172 --format={{.State.Status}}
	I1219 03:24:59.139581  380735 out.go:179] * Verifying dashboard addon...
	I1219 03:24:59.141351  380735 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:24:59.144824  380735 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:24:59.144839  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:24:59.644196  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:00.144643  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:00.645028  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:01.144630  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:01.645089  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:02.144961  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:02.644679  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:03.145333  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:03.644986  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:04.144758  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:04.645494  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:05.145229  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:05.644918  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:06.145818  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:06.644221  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:07.145109  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:07.645012  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:08.146248  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:08.645246  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:09.145504  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:09.644904  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:10.145240  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:10.645473  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:11.145036  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:11.644615  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:12.145741  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:12.645028  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:13.145328  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:13.644988  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:14.145412  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:14.644866  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:15.145841  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:15.645279  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:16.145389  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:16.645890  380735 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:25:17.144951  380735 kapi.go:107] duration metric: took 18.00359908s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:25:17.147090  380735 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-837172 addons enable metrics-server
	
	I1219 03:25:17.148352  380735 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1219 03:25:17.149617  380735 addons.go:546] duration metric: took 23.727342758s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1219 03:25:17.149650  380735 start.go:247] waiting for cluster config update ...
	I1219 03:25:17.149660  380735 start.go:256] writing updated cluster config ...
	I1219 03:25:17.149904  380735 ssh_runner.go:195] Run: rm -f paused
	I1219 03:25:17.210395  380735 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:25:17.212158  380735 out.go:179] * Done! kubectl is now configured to use "newest-cni-837172" cluster and "default" namespace by default
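The repeated kapi.go lines above poll the kubernetes-dashboard namespace roughly every half second for a pod matching the label app.kubernetes.io/name=kubernetes-dashboard-web until it leaves Pending, which took about 18s in this run. Below is a rough client-go sketch of that kind of label-selector wait, assuming a kubeconfig at the default location rather than minikube's internal kapi plumbing; the names used are only illustrative.

	// dashboard_wait.go - wait for the dashboard web pod to report Running
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes ~/.kube/config points at the cluster (as "Done!" above reports).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("dashboard web pod is Running")
				return
			}
			time.Sleep(500 * time.Millisecond) // similar cadence to the log above
		}
	}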
	
	
	==> CRI-O <==
	Dec 19 03:25:13 newest-cni-837172 crio[519]: time="2025-12-19T03:25:13.774969101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:13 newest-cni-837172 crio[519]: time="2025-12-19T03:25:13.804423367Z" level=info msg="Created container 7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53: kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/clear-stale-pid" id=f70de915-f2ed-4b2c-9d47-ca535726c0d9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:13 newest-cni-837172 crio[519]: time="2025-12-19T03:25:13.805106814Z" level=info msg="Starting container: 7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53" id=cbfede47-048f-4203-bde1-8919e8d1f6c9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:25:13 newest-cni-837172 crio[519]: time="2025-12-19T03:25:13.807492463Z" level=info msg="Started container" PID=1734 containerID=7cba6c407f3081b501d70bb4e602843ddf95a6e38ff82f06cae4c64dd5241a53 description=kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/clear-stale-pid id=cbfede47-048f-4203-bde1-8919e8d1f6c9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9dc504c9c3f255b0364c23e49ab87a1d5469de1fc1a0367ad8a2b6d41e97494
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.784262728Z" level=info msg="Checking image status: kong:3.9" id=bc9a5507-b8a4-4dad-9040-626db4d2673a name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.7844356Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.786577546Z" level=info msg="Checking image status: kong:3.9" id=db708e62-f4db-4a59-ad9c-118046a48b4d name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.786750705Z" level=info msg="Resolving \"kong\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.790785279Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/proxy" id=c8834068-3236-4b8c-bb08-e043c74b0baf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:14 newest-cni-837172 crio[519]: time="2025-12-19T03:25:14.790919262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.08682841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.087585958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.117557795Z" level=info msg="Created container b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc: kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/proxy" id=c8834068-3236-4b8c-bb08-e043c74b0baf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.118243854Z" level=info msg="Starting container: b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc" id=7ae8a508-e08f-489f-a479-5212a9219d98 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.120857731Z" level=info msg="Started container" PID=1865 containerID=b25fa25074c8ca610e9a68687a7b15a7b0b8125db15bcaebee79770a1fca59bc description=kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp/proxy id=7ae8a508-e08f-489f-a479-5212a9219d98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9dc504c9c3f255b0364c23e49ab87a1d5469de1fc1a0367ad8a2b6d41e97494
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.129003001Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30" id=6cd5c5c8-b0b0-437f-8e10-ca61897ef863 name=/runtime.v1.ImageService/PullImage
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.129726154Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-web:1.7.0" id=2b5bdd5f-d9ed-4eed-b5a3-84759e0a58f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.132296655Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard-web:1.7.0" id=614f34bb-fa95-44d4-8fe6-1cf16965a90e name=/runtime.v1.ImageService/ImageStatus
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.137097298Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-h5czz/kubernetes-dashboard-web" id=fafe735f-7790-4f27-83a6-660d565cf225 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.137244058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.142264082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.14309182Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.169906843Z" level=info msg="Created container 4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884: kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-h5czz/kubernetes-dashboard-web" id=fafe735f-7790-4f27-83a6-660d565cf225 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.170606259Z" level=info msg="Starting container: 4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884" id=ed3b7cc2-f381-439b-a981-a1684f0b6988 name=/runtime.v1.RuntimeService/StartContainer
	Dec 19 03:25:16 newest-cni-837172 crio[519]: time="2025-12-19T03:25:16.173212041Z" level=info msg="Started container" PID=1935 containerID=4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884 description=kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-h5czz/kubernetes-dashboard-web id=ed3b7cc2-f381-439b-a981-a1684f0b6988 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6536c51414ccbabb686887e9377261239224552baf5cf9357bb5ec80a03140a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                      CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	4121c422fd7f0       docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30               7 seconds ago       Running             kubernetes-dashboard-web               0                   d6536c51414cc       kubernetes-dashboard-web-7f7574785f-h5czz               kubernetes-dashboard
	b25fa25074c8c       3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480                                                           7 seconds ago       Running             proxy                                  0                   c9dc504c9c3f2       kubernetes-dashboard-kong-78b7499b45-25khp              kubernetes-dashboard
	7cba6c407f308       docker.io/library/kong@sha256:73ac10ce4d2c5b3b8b4acd6c8117b4e72d1a201d95be2d51aeae8324d776a108                             9 seconds ago       Exited              clear-stale-pid                        0                   c9dc504c9c3f2       kubernetes-dashboard-kong-78b7499b45-25khp              kubernetes-dashboard
	31d0c579886a6       docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052              13 seconds ago      Running             kubernetes-dashboard-auth              0                   cf28f2c977003       kubernetes-dashboard-auth-657c9898c4-5dtgw              kubernetes-dashboard
	f35c30abddd50       docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2               14 seconds ago      Running             kubernetes-dashboard-api               0                   67c159b752019       kubernetes-dashboard-api-845cd649f7-g8x4c               kubernetes-dashboard
	e9b169e38837e       docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc   15 seconds ago      Running             kubernetes-dashboard-metrics-scraper   0                   0966e73726019       kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v   kubernetes-dashboard
	9ccabcaa81e1b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                           16 seconds ago      Running             coredns                                0                   149e5476c3c37       coredns-7d764666f9-ckc9j                                kube-system
	d0c550beeeb65       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                           16 seconds ago      Running             storage-provisioner                    0                   eff06a6d37cc7       storage-provisioner                                     kube-system
	ed72659cbbde1       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                                           27 seconds ago      Running             kindnet-cni                            1                   9747449bd87ae       kindnet-846n4                                           kube-system
	da4194cf3330d       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                                           27 seconds ago      Running             kube-proxy                             1                   95f806b887dbe       kube-proxy-6wg2n                                        kube-system
	fda13eb9d3da0       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                                           30 seconds ago      Running             kube-controller-manager                1                   8df8cf3c1ccb9       kube-controller-manager-newest-cni-837172               kube-system
	8eb6c2ea1b67c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                           30 seconds ago      Running             etcd                                   1                   3b262e6d2188d       etcd-newest-cni-837172                                  kube-system
	008ab506a7502       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                                           30 seconds ago      Running             kube-scheduler                         1                   ad1687b534153       kube-scheduler-newest-cni-837172                        kube-system
	f83a8d18b586d       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                                           30 seconds ago      Running             kube-apiserver                         1                   1c304c2a783ae       kube-apiserver-newest-cni-837172                        kube-system
	
	
	==> coredns [9ccabcaa81e1b6495f7bf45ad6fa9e2ea8e2e4a121f83d0d47cc7b4febce7387] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52615 - 14779 "HINFO IN 4720937862764141502.7647195332102853574. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023038929s
	
	
	==> describe nodes <==
	Name:               newest-cni-837172
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-837172
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=newest-cni-837172
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_24_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:24:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-837172
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:25:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:25:06 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:25:06 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:25:06 +0000   Fri, 19 Dec 2025 03:24:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:25:06 +0000   Fri, 19 Dec 2025 03:25:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-837172
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                89c49ec5-bdd2-4caa-8f8e-fdb6f1a61d8d
	  Boot ID:                    003a195b-ad66-4f40-bd04-61ac18f31982
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-ckc9j                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     59s
	  kube-system                 etcd-newest-cni-837172                                   100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         64s
	  kube-system                 kindnet-846n4                                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-newest-cni-837172                         250m (3%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-newest-cni-837172                200m (2%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-6wg2n                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-newest-cni-837172                         100m (1%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-api-845cd649f7-g8x4c                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     25s
	  kubernetes-dashboard        kubernetes-dashboard-auth-657c9898c4-5dtgw               100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     25s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-25khp               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v    100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     25s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-h5czz                100m (1%)     250m (3%)   200Mi (0%)       400Mi (1%)     25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1250m (15%)  1100m (13%)
	  memory             1020Mi (3%)  1820Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  60s   node-controller  Node newest-cni-837172 event: Registered Node newest-cni-837172 in Controller
	  Normal  RegisteredNode  26s   node-controller  Node newest-cni-837172 event: Registered Node newest-cni-837172 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0e fd 49 cf 22 d4 08 06
	[  +0.000391] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 1d 43 01 21 5d 08 06
	[ +52.179981] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 61 6d 35 fe d9 08 06
	[  +0.054551] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[Dec19 03:02] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.006626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[ +41.290311] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 e2 03 09 f9 ee 08 06
	[  +0.000339] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 f7 d8 9c d3 e1 08 06
	[ +11.263652] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 21 62 ef f2 0d 08 06
	[  +0.000512] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 79 a6 ee 24 67 08 06
	[  +0.000582] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 9b 25 28 cd 67 08 06
	[Dec19 03:04] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 0a f8 fc e6 0c d6 08 06
	[  +0.000346] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0a 68 bd ee 75 9a 08 06
	
	
	==> etcd [8eb6c2ea1b67c5f966862ab6d7cb34785fcf89532732134cb85e859b4ed40cd2] <==
	{"level":"info","ts":"2025-12-19T03:24:53.316185Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-19T03:24:53.316185Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-19T03:24:53.315347Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-19T03:24:53.316390Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-19T03:24:53.316021Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T03:24:53.316547Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-19T03:24:53.315313Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-19T03:24:53.804906Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:53.804965Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:53.805040Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-12-19T03:24:53.805062Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:24:53.805088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:24:53.805898Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-19T03:24:53.805925Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-19T03:24:53.805945Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:24:53.805955Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-12-19T03:24:53.806554Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-837172 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:24:53.806583Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:24:53.806628Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:24:53.806951Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:24:53.806985Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:24:53.807880Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:24:53.807961Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T03:24:53.810837Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:24:53.811058Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 03:25:23 up  1:07,  0 user,  load average: 1.81, 1.01, 1.25
	Linux newest-cni-837172 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed72659cbbde1ac0d1a3f92bad6e21b22b83b110f25c911c29be464bd98e904e] <==
	I1219 03:24:56.283403       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1219 03:24:56.283695       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1219 03:24:56.380258       1 main.go:148] setting mtu 1500 for CNI 
	I1219 03:24:56.380302       1 main.go:178] kindnetd IP family: "ipv4"
	I1219 03:24:56.380328       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-19T03:24:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1219 03:24:56.583349       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1219 03:24:56.583446       1 controller.go:381] "Waiting for informer caches to sync"
	I1219 03:24:56.583690       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1219 03:24:56.584082       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1219 03:24:56.980388       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1219 03:24:56.980436       1 metrics.go:72] Registering metrics
	I1219 03:24:56.980506       1 controller.go:711] "Syncing nftables rules"
	I1219 03:25:06.584170       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:25:06.584239       1 main.go:301] handling current node
	I1219 03:25:16.583851       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1219 03:25:16.583949       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f83a8d18b586d30bed3f7b09ab702d2f91fad51a1abf5039cd003618810c256f] <==
	I1219 03:24:56.509947       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	W1219 03:24:57.830896       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 03:24:57.844534       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.854195       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.869749       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.880084       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.891480       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.900454       1 logging.go:55] [core] [Channel #283 SubChannel #284]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.919206       1 logging.go:55] [core] [Channel #287 SubChannel #288]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.933614       1 logging.go:55] [core] [Channel #291 SubChannel #292]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.942690       1 logging.go:55] [core] [Channel #295 SubChannel #296]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.953011       1 logging.go:55] [core] [Channel #299 SubChannel #300]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 03:24:57.967794       1 logging.go:55] [core] [Channel #303 SubChannel #304]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1219 03:24:58.840989       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 03:24:58.892973       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1219 03:24:58.897275       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1219 03:24:58.908863       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.111.204.138"}
	I1219 03:24:58.909147       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.219.4"}
	I1219 03:24:58.911243       1 controller.go:667] quota admission added evaluator for: endpoints
	I1219 03:24:58.912899       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.97.207.119"}
	I1219 03:24:58.916386       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.102.10.174"}
	I1219 03:24:58.918947       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1219 03:24:58.919674       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.96.83.11"}
	I1219 03:24:58.924038       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1219 03:24:58.935527       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fda13eb9d3da07d79f3fb0741d2e97dea1c1da35c264d9c835de5f407e51329b] <==
	I1219 03:24:57.915028       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915185       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915199       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915384       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915203       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.915213       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.916385       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.916727       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.916818       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.917235       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.917303       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.917618       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.917673       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.918552       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.920346       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:57.921998       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:24:57.928160       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-837172"
	I1219 03:24:57.929042       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1219 03:24:59.014245       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:59.017306       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:59.017416       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:24:59.017436       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 03:24:59.023117       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:59.026676       1 shared_informer.go:377] "Caches are synced"
	I1219 03:25:07.931909       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [da4194cf3330daac8fb3341a58e3cc51bc2a445831557d5264d994bb830c2072] <==
	I1219 03:24:56.134456       1 server_linux.go:53] "Using iptables proxy"
	I1219 03:24:56.202473       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:24:56.303418       1 shared_informer.go:377] "Caches are synced"
	I1219 03:24:56.303473       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1219 03:24:56.303581       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:24:56.321151       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1219 03:24:56.321218       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:24:56.326635       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:24:56.327076       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:24:56.327120       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:24:56.328506       1 config.go:200] "Starting service config controller"
	I1219 03:24:56.328729       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:24:56.328597       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:24:56.328806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:24:56.328806       1 config.go:309] "Starting node config controller"
	I1219 03:24:56.328620       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:24:56.328830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:24:56.328819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:24:56.429798       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:24:56.429839       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:24:56.429867       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:24:56.433784       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [008ab506a7502be9960429732f171f4e4404010c1d76b19a75fa46ee6f21e968] <==
	I1219 03:24:53.605450       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:24:54.696966       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:24:54.700115       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:24:54.700155       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:24:54.700165       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:24:54.719764       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:24:54.719861       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:24:54.722680       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:24:54.722733       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:24:54.722911       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:24:54.723111       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:24:54.823789       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: E1219 03:25:08.028471     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-837172" containerName="kube-controller-manager"
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: I1219 03:25:08.757680     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: I1219 03:25:08.757783     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: E1219 03:25:08.762085     674 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-837172" containerName="kube-scheduler"
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: E1219 03:25:08.762195     674 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:25:08 newest-cni-837172 kubelet[674]: E1219 03:25:08.762418     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ckc9j" containerName="coredns"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.659161     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.659253     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: E1219 03:25:09.768049     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ckc9j" containerName="coredns"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: E1219 03:25:09.768245     674 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.778751     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-brl2v" podStartSLOduration=10.947466533 podStartE2EDuration="11.778731444s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.054076537 +0000 UTC m=+14.442233161" lastFinishedPulling="2025-12-19 03:25:07.885341451 +0000 UTC m=+15.273498072" observedRunningTime="2025-12-19 03:25:08.773564172 +0000 UTC m=+16.161720798" watchObservedRunningTime="2025-12-19 03:25:09.778731444 +0000 UTC m=+17.166888073"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.790896     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-845cd649f7-g8x4c" podStartSLOduration=10.089408675 podStartE2EDuration="11.790876249s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.055625057 +0000 UTC m=+14.443781678" lastFinishedPulling="2025-12-19 03:25:08.757092644 +0000 UTC m=+16.145249252" observedRunningTime="2025-12-19 03:25:09.778927187 +0000 UTC m=+17.167083813" watchObservedRunningTime="2025-12-19 03:25:09.790876249 +0000 UTC m=+17.179032877"
	Dec 19 03:25:09 newest-cni-837172 kubelet[674]: I1219 03:25:09.791030     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-657c9898c4-5dtgw" podStartSLOduration=9.188099307 podStartE2EDuration="11.791021669s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.055656771 +0000 UTC m=+14.443813387" lastFinishedPulling="2025-12-19 03:25:09.658579125 +0000 UTC m=+17.046735749" observedRunningTime="2025-12-19 03:25:09.79036217 +0000 UTC m=+17.178518796" watchObservedRunningTime="2025-12-19 03:25:09.791021669 +0000 UTC m=+17.179178296"
	Dec 19 03:25:14 newest-cni-837172 kubelet[674]: E1219 03:25:14.783826     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp" containerName="proxy"
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: I1219 03:25:16.131737     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: I1219 03:25:16.131824     674 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"8","ephemeral-storage":"304681132Ki","hugepages-1Gi":"0","hugepages-2Mi":"0","memory":"32863360Ki","pods":"110"}
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: E1219 03:25:16.791409     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp" containerName="proxy"
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: I1219 03:25:16.816435     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-web-7f7574785f-h5czz" podStartSLOduration=9.743654321 podStartE2EDuration="18.816413772s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.05843105 +0000 UTC m=+14.446587657" lastFinishedPulling="2025-12-19 03:25:16.131190497 +0000 UTC m=+23.519347108" observedRunningTime="2025-12-19 03:25:16.81595879 +0000 UTC m=+24.204115417" watchObservedRunningTime="2025-12-19 03:25:16.816413772 +0000 UTC m=+24.204570402"
	Dec 19 03:25:16 newest-cni-837172 kubelet[674]: I1219 03:25:16.816574     674 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp" podStartSLOduration=11.088893528 podStartE2EDuration="18.816564642s" podCreationTimestamp="2025-12-19 03:24:58 +0000 UTC" firstStartedPulling="2025-12-19 03:25:07.058275106 +0000 UTC m=+14.446431712" lastFinishedPulling="2025-12-19 03:25:14.785946217 +0000 UTC m=+22.174102826" observedRunningTime="2025-12-19 03:25:16.806276321 +0000 UTC m=+24.194433014" watchObservedRunningTime="2025-12-19 03:25:16.816564642 +0000 UTC m=+24.204721270"
	Dec 19 03:25:17 newest-cni-837172 kubelet[674]: E1219 03:25:17.795616     674 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-25khp" containerName="proxy"
	Dec 19 03:25:18 newest-cni-837172 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 19 03:25:18 newest-cni-837172 kubelet[674]: I1219 03:25:18.219378     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 19 03:25:18 newest-cni-837172 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 19 03:25:18 newest-cni-837172 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 19 03:25:18 newest-cni-837172 systemd[1]: kubelet.service: Consumed 1.437s CPU time.
	
	
	==> kubernetes-dashboard [31d0c579886a60b8b9f5ac47b7e2223c4edc5ba9511956f59b46b4e6e8a493b5] <==
	I1219 03:25:09.785271       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:25:09.785340       1 init.go:49] Using in-cluster config
	I1219 03:25:09.785458       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [4121c422fd7f07082964172b84890a33d2bd911602a06e1d0b279ae6a4604884] <==
	I1219 03:25:16.249912       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:25:16.249964       1 init.go:48] Using in-cluster config
	I1219 03:25:16.250124       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [e9b169e38837e0e14dc98c97fe6c76f4950aa4f2569e237239b942cffc704403] <==
	10.42.0.1 - - [19/Dec/2025:03:25:08 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	I1219 03:25:07.961620       1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
	W1219 03:25:07.961690       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1219 03:25:07.961838       1 main.go:51] Kubernetes host: https://10.96.0.1:443
	I1219 03:25:07.961850       1 main.go:52] Namespace(s): []
	
	
	==> kubernetes-dashboard [f35c30abddd500953c627635a6009dfd10e461179831ce083750a776a74432d0] <==
	I1219 03:25:08.887252       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:25:08.887322       1 init.go:49] Using in-cluster config
	I1219 03:25:08.887553       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:25:08.887567       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:25:08.887573       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:25:08.887579       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:25:08.894002       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 03:25:08.894029       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:25:08.899584       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 03:25:08.904271       1 manager.go:101] Successful request to sidecar
	
	
	==> storage-provisioner [d0c550beeeb6567d0a1c34c8c5920c0f2dd33b9ad81073dd94e7b76aaff5887a] <==
	I1219 03:25:07.098379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1219 03:25:07.100956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:07.105386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:25:07.106406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:25:07.106494       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"66bc8803-98fb-4cc5-b38a-9ba3185661dc", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-837172_d54e61ba-f8df-4afd-a954-60be7880f59b became leader
	I1219 03:25:07.106584       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-837172_d54e61ba-f8df-4afd-a954-60be7880f59b!
	W1219 03:25:07.109354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:07.112663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1219 03:25:07.207772       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-837172_d54e61ba-f8df-4afd-a954-60be7880f59b!
	W1219 03:25:09.116645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:09.120909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:11.124826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:11.134636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:13.138431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:13.143008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:15.146121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:15.151340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:17.157314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:17.161181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:19.164435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:19.168673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:21.171645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:21.175779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:23.178867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:25:23.182550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-837172 -n newest-cni-837172
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-837172 -n newest-cni-837172: exit status 2 (328.998607ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-837172 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.35s)
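The post-mortem above reduces to two probes: the profile's API-server status and a listing of non-Running pods. A minimal sketch of rerunning those probes by hand, assuming the out/minikube-linux-amd64 binary and the newest-cni-837172 profile/context from this run are still reachable; this is only the equivalent commands wrapped in os/exec, not the helpers_test.go implementation:

// postmortem.go: rerun the two checks shown in the post-mortem above.
// The binary path and the profile name are copied from this report and
// are assumptions about the environment, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		// Mirror the report: a non-zero exit is recorded, not fatal
		// (status returned exit status 2 while printing "Running").
		fmt.Printf("non-zero exit: %v\n", err)
	}
}

func main() {
	// API-server status for the profile.
	run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}",
		"-p", "newest-cni-837172", "-n", "newest-cni-837172")
	// Pods in any namespace whose phase is not Running.
	run("kubectl", "--context", "newest-cni-837172", "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running")
}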

                                                
                                    

Test pass (345/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.17
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 3.12
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.23
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-rc.1/json-events 3.44
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
29 TestDownloadOnlyKic 0.4
30 TestBinaryMirror 0.81
31 TestOffline 54.46
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 93.56
40 TestAddons/serial/GCPAuth/Namespaces 0.22
41 TestAddons/serial/GCPAuth/FakeCredentials 9.53
57 TestAddons/StoppedEnableDisable 16.83
58 TestCertOptions 33.11
59 TestCertExpiration 217.94
61 TestForceSystemdFlag 29.47
62 TestForceSystemdEnv 36.69
67 TestErrorSpam/setup 21.91
68 TestErrorSpam/start 0.64
69 TestErrorSpam/status 0.95
70 TestErrorSpam/pause 6.47
71 TestErrorSpam/unpause 5.96
72 TestErrorSpam/stop 8.11
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 41.93
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.25
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.11
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.52
84 TestFunctional/serial/CacheCmd/cache/add_local 0.92
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 66.62
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.19
95 TestFunctional/serial/LogsFileCmd 1.19
96 TestFunctional/serial/InvalidService 4
98 TestFunctional/parallel/ConfigCmd 0.45
100 TestFunctional/parallel/DryRun 0.52
101 TestFunctional/parallel/InternationalLanguage 0.23
102 TestFunctional/parallel/StatusCmd 0.99
106 TestFunctional/parallel/ServiceCmdConnect 7.79
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 24.81
110 TestFunctional/parallel/SSHCmd 0.59
111 TestFunctional/parallel/CpCmd 1.92
112 TestFunctional/parallel/MySQL 21.39
113 TestFunctional/parallel/FileSync 0.27
114 TestFunctional/parallel/CertSync 1.88
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
122 TestFunctional/parallel/License 0.25
123 TestFunctional/parallel/Version/short 0.08
124 TestFunctional/parallel/Version/components 0.57
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.38
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.65
130 TestFunctional/parallel/ImageCommands/Setup 0.43
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.33
134 TestFunctional/parallel/ServiceCmd/DeployApp 8.17
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.64
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.21
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.58
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
147 TestFunctional/parallel/ServiceCmd/List 0.51
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
150 TestFunctional/parallel/ServiceCmd/Format 0.35
151 TestFunctional/parallel/ServiceCmd/URL 0.35
152 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
153 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
157 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
158 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
159 TestFunctional/parallel/ProfileCmd/profile_list 0.44
160 TestFunctional/parallel/MountCmd/any-port 14.19
161 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
162 TestFunctional/parallel/MountCmd/specific-port 2.09
163 TestFunctional/parallel/MountCmd/VerifyCleanup 2.15
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 36.52
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 6.4
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.63
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 0.85
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.3
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.55
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 45.83
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.19
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.23
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.09
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.46
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.41
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.18
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 1.01
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 7.52
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.14
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 26.1
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.6
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.91
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 24.75
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.86
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.08
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.67
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.27
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.61
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.39
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.29
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.99
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.45
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 5.18
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.16
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.32
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.17
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.31
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.17
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 7.16
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.94
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 7.21
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.96
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.51
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.61
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.39
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.5
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.51
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.35
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.37
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.35
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.44
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.46
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 7.26
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.41
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 2.14
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.93
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 107.79
266 TestMultiControlPlane/serial/DeployApp 3.81
267 TestMultiControlPlane/serial/PingHostFromPods 1.06
268 TestMultiControlPlane/serial/AddWorkerNode 24.28
269 TestMultiControlPlane/serial/NodeLabels 0.06
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
271 TestMultiControlPlane/serial/CopyFile 17.47
272 TestMultiControlPlane/serial/StopSecondaryNode 19.79
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
274 TestMultiControlPlane/serial/RestartSecondaryNode 8.71
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 103.94
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.67
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
279 TestMultiControlPlane/serial/StopCluster 42.33
280 TestMultiControlPlane/serial/RestartCluster 53.33
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
282 TestMultiControlPlane/serial/AddSecondaryNode 44.69
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
288 TestJSONOutput/start/Command 42.08
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 6.06
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.23
313 TestKicCustomNetwork/create_custom_network 26.71
314 TestKicCustomNetwork/use_default_bridge_network 24.86
315 TestKicExistingNetwork 25.31
316 TestKicCustomSubnet 24.52
317 TestKicStaticIP 25.34
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 50.69
322 TestMountStart/serial/StartWithMountFirst 7.75
323 TestMountStart/serial/VerifyMountFirst 0.28
324 TestMountStart/serial/StartWithMountSecond 7.88
325 TestMountStart/serial/VerifyMountSecond 0.29
326 TestMountStart/serial/DeleteFirst 1.67
327 TestMountStart/serial/VerifyMountPostDelete 0.28
328 TestMountStart/serial/Stop 1.25
329 TestMountStart/serial/RestartStopped 7.32
330 TestMountStart/serial/VerifyMountPostStop 0.29
333 TestMultiNode/serial/FreshStart2Nodes 67.93
334 TestMultiNode/serial/DeployApp2Nodes 4.02
335 TestMultiNode/serial/PingHostFrom2Pods 0.73
336 TestMultiNode/serial/AddNode 27.11
337 TestMultiNode/serial/MultiNodeLabels 0.06
338 TestMultiNode/serial/ProfileList 0.68
339 TestMultiNode/serial/CopyFile 10.07
340 TestMultiNode/serial/StopNode 2.26
341 TestMultiNode/serial/StartAfterStop 7.22
342 TestMultiNode/serial/RestartKeepsNodes 79.78
343 TestMultiNode/serial/DeleteNode 5.27
344 TestMultiNode/serial/StopMultiNode 30.36
345 TestMultiNode/serial/RestartMultiNode 50.82
346 TestMultiNode/serial/ValidateNameConflict 22.63
351 TestPreload 104.36
353 TestScheduledStopUnix 98.57
356 TestInsufficientStorage 9.01
357 TestRunningBinaryUpgrade 68.94
359 TestKubernetesUpgrade 304.57
360 TestMissingContainerUpgrade 64.28
362 TestPause/serial/Start 47.66
363 TestPause/serial/SecondStartNoReconfiguration 6.03
366 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
367 TestNoKubernetes/serial/StartWithK8s 25.74
375 TestNetworkPlugins/group/false 5.26
379 TestNoKubernetes/serial/StartWithStopK8s 23.86
380 TestNoKubernetes/serial/Start 9.44
381 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
382 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
383 TestNoKubernetes/serial/ProfileList 3.73
384 TestNoKubernetes/serial/Stop 1.33
385 TestNoKubernetes/serial/StartNoArgs 6.9
386 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
387 TestStoppedBinaryUpgrade/Setup 0.79
388 TestStoppedBinaryUpgrade/Upgrade 285.16
396 TestNetworkPlugins/group/auto/Start 41.44
397 TestNetworkPlugins/group/auto/KubeletFlags 0.29
398 TestNetworkPlugins/group/auto/NetCatPod 8.2
399 TestNetworkPlugins/group/auto/DNS 0.15
400 TestNetworkPlugins/group/auto/Localhost 0.1
401 TestNetworkPlugins/group/auto/HairPin 0.09
402 TestNetworkPlugins/group/flannel/Start 51.39
403 TestNetworkPlugins/group/kindnet/Start 43.53
404 TestNetworkPlugins/group/flannel/ControllerPod 6.01
405 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
406 TestNetworkPlugins/group/flannel/NetCatPod 8.21
407 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
408 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
409 TestNetworkPlugins/group/kindnet/NetCatPod 9.17
410 TestNetworkPlugins/group/flannel/DNS 0.12
411 TestNetworkPlugins/group/flannel/Localhost 0.09
412 TestNetworkPlugins/group/flannel/HairPin 0.09
413 TestNetworkPlugins/group/kindnet/DNS 0.11
414 TestNetworkPlugins/group/kindnet/Localhost 0.09
415 TestNetworkPlugins/group/kindnet/HairPin 0.1
416 TestNetworkPlugins/group/enable-default-cni/Start 69.87
417 TestNetworkPlugins/group/bridge/Start 71.35
418 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
419 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
420 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
421 TestNetworkPlugins/group/enable-default-cni/Localhost 0.08
422 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
423 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
424 TestNetworkPlugins/group/bridge/NetCatPod 9.18
425 TestNetworkPlugins/group/bridge/DNS 0.15
426 TestNetworkPlugins/group/bridge/Localhost 0.11
427 TestNetworkPlugins/group/bridge/HairPin 0.11
428 TestNetworkPlugins/group/calico/Start 53.54
429 TestNetworkPlugins/group/custom-flannel/Start 50.65
430 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
432 TestStartStop/group/old-k8s-version/serial/FirstStart 55.07
434 TestStartStop/group/no-preload/serial/FirstStart 52.99
435 TestNetworkPlugins/group/calico/ControllerPod 6.01
436 TestNetworkPlugins/group/calico/KubeletFlags 0.3
437 TestNetworkPlugins/group/calico/NetCatPod 10.2
438 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
439 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
440 TestNetworkPlugins/group/calico/DNS 0.16
441 TestNetworkPlugins/group/calico/Localhost 0.14
442 TestNetworkPlugins/group/calico/HairPin 0.15
443 TestNetworkPlugins/group/custom-flannel/DNS 0.15
444 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
445 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
446 TestStartStop/group/old-k8s-version/serial/DeployApp 9.31
447 TestStartStop/group/no-preload/serial/DeployApp 8.27
449 TestStartStop/group/embed-certs/serial/FirstStart 40.71
453 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.5
454 TestStartStop/group/old-k8s-version/serial/Stop 16.11
455 TestStartStop/group/no-preload/serial/Stop 16.34
456 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
457 TestStartStop/group/old-k8s-version/serial/SecondStart 46.29
458 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
459 TestStartStop/group/no-preload/serial/SecondStart 49.16
460 TestStartStop/group/embed-certs/serial/DeployApp 8.29
461 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
463 TestStartStop/group/embed-certs/serial/Stop 16.56
465 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.8
466 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
467 TestStartStop/group/embed-certs/serial/SecondStart 46.66
470 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.33
471 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.21
478 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
480 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
483 TestStartStop/group/newest-cni/serial/FirstStart 24.41
484 TestStartStop/group/newest-cni/serial/DeployApp 0
486 TestStartStop/group/newest-cni/serial/Stop 18.29
487 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
489 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
491 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
492 TestStartStop/group/newest-cni/serial/SecondStart 31.43
493 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
494 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
495 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
x
+
TestDownloadOnly/v1.28.0/json-events (4.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-940312 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-940312 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.166809594s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1219 02:24:54.483155    8536 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1219 02:24:54.483231    8536 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
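The preload-exists check amounts to confirming that the cached tarball named in the log is on disk. A minimal sketch, using the exact path from the log line above (an assumption tied to this CI host; elsewhere it would sit under the MINIKUBE_HOME cache/preloaded-tarball directory); it mirrors the check rather than reproducing the aaa_download_only_test.go code:

// preloadcheck.go: verify the cached preload tarball is present.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied from the log above; specific to this CI host.
	const preload = "/home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"

	if info, err := os.Stat(preload); err == nil {
		fmt.Printf("found local preload: %s (%d bytes)\n", preload, info.Size())
	} else {
		fmt.Printf("preload missing, would be downloaded: %v\n", err)
	}
}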

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-940312
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-940312: exit status 85 (70.802517ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-940312 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-940312 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:24:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:24:50.369324    8549 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:24:50.370013    8549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:24:50.370025    8549 out.go:374] Setting ErrFile to fd 2...
	I1219 02:24:50.370031    8549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:24:50.370263    8549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	W1219 02:24:50.370404    8549 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22230-4987/.minikube/config/config.json: open /home/jenkins/minikube-integration/22230-4987/.minikube/config/config.json: no such file or directory
	I1219 02:24:50.370909    8549 out.go:368] Setting JSON to true
	I1219 02:24:50.371833    8549 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":441,"bootTime":1766110649,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:24:50.371891    8549 start.go:143] virtualization: kvm guest
	I1219 02:24:50.375740    8549 out.go:99] [download-only-940312] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:24:50.375883    8549 notify.go:221] Checking for updates...
	W1219 02:24:50.375846    8549 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball: no such file or directory
	I1219 02:24:50.377182    8549 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:24:50.378449    8549 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:24:50.379556    8549 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:24:50.380668    8549 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:24:50.381770    8549 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:24:50.384185    8549 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:24:50.384461    8549 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:24:50.409684    8549 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:24:50.409791    8549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:24:50.626139    8549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-19 02:24:50.615491972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:24:50.626257    8549 docker.go:319] overlay module found
	I1219 02:24:50.627926    8549 out.go:99] Using the docker driver based on user configuration
	I1219 02:24:50.627960    8549 start.go:309] selected driver: docker
	I1219 02:24:50.627969    8549 start.go:928] validating driver "docker" against <nil>
	I1219 02:24:50.628074    8549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:24:50.687417    8549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-19 02:24:50.678394053 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:24:50.687575    8549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:24:50.688126    8549 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1219 02:24:50.688279    8549 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:24:50.690078    8549 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-940312 host does not exist
	  To start a cluster, run: "minikube start -p download-only-940312"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-940312
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/json-events (3.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-516964 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-516964 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.121454424s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (3.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1219 02:24:58.042387    8536 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1219 02:24:58.042433    8536 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)
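Note: the preload-exists check above only confirms the tarball is present in the local cache. A minimal manual check, reusing the cache path printed in the log, would be:

    # hypothetical manual check of the preload cache (path taken from the log above)
    ls -lh /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/
    # the v1.34.3 cri-o preload should be listed as:
    #   preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4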

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-516964
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-516964: exit status 85 (71.115497ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-940312 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-940312 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ delete  │ -p download-only-940312                                                                                                                                                   │ download-only-940312 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ start   │ -o=json --download-only -p download-only-516964 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-516964 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:24:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:24:54.972441    8910 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:24:54.973150    8910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:24:54.973162    8910 out.go:374] Setting ErrFile to fd 2...
	I1219 02:24:54.973169    8910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:24:54.973378    8910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:24:54.973867    8910 out.go:368] Setting JSON to true
	I1219 02:24:54.974637    8910 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":446,"bootTime":1766110649,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:24:54.974691    8910 start.go:143] virtualization: kvm guest
	I1219 02:24:54.976601    8910 out.go:99] [download-only-516964] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:24:54.976814    8910 notify.go:221] Checking for updates...
	I1219 02:24:54.978285    8910 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:24:54.979672    8910 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:24:54.980942    8910 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:24:54.982221    8910 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:24:54.983499    8910 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:24:54.985821    8910 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:24:54.986061    8910 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:24:55.010308    8910 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:24:55.010422    8910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:24:55.066199    8910 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-19 02:24:55.056833782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:24:55.066390    8910 docker.go:319] overlay module found
	I1219 02:24:55.068227    8910 out.go:99] Using the docker driver based on user configuration
	I1219 02:24:55.068263    8910 start.go:309] selected driver: docker
	I1219 02:24:55.068272    8910 start.go:928] validating driver "docker" against <nil>
	I1219 02:24:55.068355    8910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:24:55.122332    8910 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-19 02:24:55.112788588 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:24:55.122492    8910 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:24:55.123005    8910 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1219 02:24:55.123211    8910 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:24:55.124989    8910 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-516964 host does not exist
	  To start a cluster, run: "minikube start -p download-only-516964"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-516964
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/json-events (3.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-494334 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-494334 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.44011014s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (3.44s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1219 02:25:01.926111    8536 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1219 02:25:01.926152    8536 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-494334
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-494334: exit status 85 (69.913678ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-940312 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-940312 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ delete  │ -p download-only-940312                                                                                                                                                        │ download-only-940312 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ start   │ -o=json --download-only -p download-only-516964 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-516964 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ delete  │ -p download-only-516964                                                                                                                                                        │ download-only-516964 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │ 19 Dec 25 02:24 UTC │
	│ start   │ -o=json --download-only -p download-only-494334 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-494334 │ jenkins │ v1.37.0 │ 19 Dec 25 02:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:24:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:24:58.535772    9251 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:24:58.535978    9251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:24:58.535986    9251 out.go:374] Setting ErrFile to fd 2...
	I1219 02:24:58.535990    9251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:24:58.536176    9251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:24:58.536604    9251 out.go:368] Setting JSON to true
	I1219 02:24:58.537378    9251 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":450,"bootTime":1766110649,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:24:58.537431    9251 start.go:143] virtualization: kvm guest
	I1219 02:24:58.539401    9251 out.go:99] [download-only-494334] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:24:58.539556    9251 notify.go:221] Checking for updates...
	I1219 02:24:58.540640    9251 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:24:58.542043    9251 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:24:58.543402    9251 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:24:58.548248    9251 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:24:58.549696    9251 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:24:58.552224    9251 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:24:58.552475    9251 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:24:58.576677    9251 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:24:58.576774    9251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:24:58.631601    9251 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-19 02:24:58.62225489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:24:58.631731    9251 docker.go:319] overlay module found
	I1219 02:24:58.633505    9251 out.go:99] Using the docker driver based on user configuration
	I1219 02:24:58.633533    9251 start.go:309] selected driver: docker
	I1219 02:24:58.633541    9251 start.go:928] validating driver "docker" against <nil>
	I1219 02:24:58.633616    9251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:24:58.689353    9251 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-19 02:24:58.679784829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:24:58.689497    9251 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:24:58.689970    9251 start_flags.go:411] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1219 02:24:58.690096    9251 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:24:58.691865    9251 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-494334 host does not exist
	  To start a cluster, run: "minikube start -p download-only-494334"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-494334
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.4s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-321917 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-321917" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-321917
--- PASS: TestDownloadOnlyKic (0.40s)

                                                
                                    
x
+
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
I1219 02:25:03.193600    8536 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-072289 --alsologtostderr --binary-mirror http://127.0.0.1:46753 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-072289" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-072289
--- PASS: TestBinaryMirror (0.81s)
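Note: the binary.go:80 line above shows minikube skipping its binary cache and fetching kubectl directly, with the checksum file appended as a query parameter. A rough standalone equivalent of that download-plus-verify step, assuming the standard dl.k8s.io layout referenced in the log, is:

    curl -LO https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl
    curl -LO https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
    # verify the downloaded binary against the published checksum
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check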

                                                
                                    
x
+
TestOffline (54.46s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-172724 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-172724 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (51.964803244s)
helpers_test.go:176: Cleaning up "offline-crio-172724" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-172724
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-172724: (2.496837625s)
--- PASS: TestOffline (54.46s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-791857
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-791857: exit status 85 (64.479998ms)

                                                
                                                
-- stdout --
	* Profile "addons-791857" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-791857"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-791857
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-791857: exit status 85 (64.008686ms)

                                                
                                                
-- stdout --
	* Profile "addons-791857" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-791857"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (93.56s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-791857 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-791857 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m33.56087301s)
--- PASS: TestAddons/Setup (93.56s)
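Note: Setup enables the whole addon list in a single start invocation; the serial and parallel addon tests below then exercise them individually. A quick way to inspect what that start produced (a sketch reusing the profile name from this run) is:

    out/minikube-linux-amd64 -p addons-791857 addons list
    # individual addons can also be toggled after the fact, e.g.:
    out/minikube-linux-amd64 -p addons-791857 addons enable metrics-server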

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-791857 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-791857 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-791857 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-791857 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3fb02b19-4a11-4f81-8f32-b9969dbce522] Pending
helpers_test.go:353: "busybox" [3fb02b19-4a11-4f81-8f32-b9969dbce522] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3fb02b19-4a11-4f81-8f32-b9969dbce522] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003984246s
addons_test.go:696: (dbg) Run:  kubectl --context addons-791857 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-791857 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-791857 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.53s)
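Note: this test checks that the gcp-auth webhook injects fake credentials into a freshly created pod. The same verification can be done by hand; this sketch reuses the busybox pod and context from the log, and reads the mounted file via the injected variable rather than assuming its path:

    kubectl --context addons-791857 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-791857 exec busybox -- printenv GOOGLE_CLOUD_PROJECT
    # the credentials file itself is mounted into the pod; read it through the env var
    kubectl --context addons-791857 exec busybox -- sh -c 'cat "$GOOGLE_APPLICATION_CREDENTIALS"'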

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (16.83s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-791857
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-791857: (16.499044105s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-791857
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-791857
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-791857
--- PASS: TestAddons/StoppedEnableDisable (16.83s)

                                                
                                    
x
+
TestCertOptions (33.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-351999 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-351999 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.546612632s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-351999 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-351999 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-351999 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-351999" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-351999
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-351999: (5.787076633s)
--- PASS: TestCertOptions (33.11s)
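Note: the test passes extra apiserver IPs, names and a custom port (8555) at start time, then reads the certificate back out of the node. Checking the resulting SANs and the kubeconfig endpoint by hand, with the same profile, might look like:

    out/minikube-linux-amd64 -p cert-options-351999 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # the kubeconfig entry should point at the custom 8555 apiserver port
    kubectl --context cert-options-351999 config view --minify -o jsonpath='{.clusters[0].cluster.server}'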

                                                
                                    
x
+
TestCertExpiration (217.94s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-254196 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-254196 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.067105468s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-254196 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-254196 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.451321249s)
helpers_test.go:176: Cleaning up "cert-expiration-254196" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-254196
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-254196: (2.423101328s)
--- PASS: TestCertExpiration (217.94s)
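Note: the test first starts the profile with a 3-minute certificate lifetime and later restarts it with --cert-expiration=8760h, at which point the certificates are regenerated. A quick way to eyeball the current expiry on the node (a sketch, assuming the same apiserver certificate path used by TestCertOptions) is:

    out/minikube-linux-amd64 -p cert-expiration-254196 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"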

                                                
                                    
x
+
TestForceSystemdFlag (29.47s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-675485 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-675485 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.6529783s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-675485 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-675485" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-675485
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-675485: (2.505746506s)
--- PASS: TestForceSystemdFlag (29.47s)
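Note: with --force-systemd the test expects CRI-O to be configured for the systemd cgroup manager, which it checks by reading the drop-in config shown above. A manual check would look for the usual CRI-O key (the expected content below is an assumption, not quoted from this log):

    out/minikube-linux-amd64 -p force-systemd-flag-675485 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf"
    # expected (assumed) content includes, under [crio.runtime]:
    #   cgroup_manager = "systemd"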

                                                
                                    
x
+
TestForceSystemdEnv (36.69s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-215639 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-215639 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.703375022s)
helpers_test.go:176: Cleaning up "force-systemd-env-215639" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-215639
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-215639: (2.988598543s)
--- PASS: TestForceSystemdEnv (36.69s)

                                                
                                    
x
+
TestErrorSpam/setup (21.91s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-691884 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-691884 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-691884 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-691884 --driver=docker  --container-runtime=crio: (21.905182508s)
--- PASS: TestErrorSpam/setup (21.91s)

                                                
                                    
x
+
TestErrorSpam/start (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

                                                
                                    
x
+
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
x
+
TestErrorSpam/pause (6.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause: exit status 80 (2.452913556s)

                                                
                                                
-- stdout --
	* Pausing node nospam-691884 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:30:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause: exit status 80 (2.060547994s)

                                                
                                                
-- stdout --
	* Pausing node nospam-691884 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:30:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause: exit status 80 (1.960169682s)

                                                
                                                
-- stdout --
	* Pausing node nospam-691884 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:30:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.47s)
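Note: the pause attempts above (and the unpause attempts in the next test) all exit with status 80 because the guest-side probe `sudo runc list -f json` fails with "open /run/runc: no such file or directory" on this crio node. Below is a minimal sketch, not part of the test suite, of reproducing that probe directly; it assumes runc is on PATH and that it is run as root inside the node, and it only checks runc's default state root.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirror the listing call that GUEST_PAUSE/GUEST_UNPAUSE rely on.
	out, err := exec.Command("runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
	} else {
		fmt.Printf("runc list output:\n%s", out)
	}

	// Default runc state root; crio may configure runc with a different
	// --root, in which case this directory can legitimately be absent.
	if _, statErr := os.Stat("/run/runc"); statErr != nil {
		fmt.Println("/run/runc:", statErr)
	} else {
		fmt.Println("/run/runc: present")
	}
}

If /run/runc is missing while containers are running, the runtime was likely started with a non-default state root, which would explain the failing listing above.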

                                                
                                    
x
+
TestErrorSpam/unpause (5.96s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause: exit status 80 (1.761553979s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-691884 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:30:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause: exit status 80 (2.237081907s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-691884 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:30:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause: exit status 80 (1.958832952s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-691884 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T02:30:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.96s)

                                                
                                    
x
+
TestErrorSpam/stop (8.11s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 stop: (7.902914124s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-691884 --log_dir /tmp/nospam-691884 stop
--- PASS: TestErrorSpam/stop (8.11s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/test/nested/copy/8536/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (41.93s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-736733 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.932897949s)
--- PASS: TestFunctional/serial/StartWithProxy (41.93s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.25s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1219 02:31:16.629542    8536 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736733 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-736733 --alsologtostderr -v=8: (6.251207072s)
functional_test.go:678: soft start took 6.251925666s for "functional-736733" cluster.
I1219 02:31:22.881106    8536 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (6.25s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-736733 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.52s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-736733 /tmp/TestFunctionalserialCacheCmdcacheadd_local3643688526/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cache add minikube-local-cache-test:functional-736733
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cache delete minikube-local-cache-test:functional-736733
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-736733
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.92s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.420636ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
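The sequence this test drives (crictl rmi, a failing crictl inspecti, cache reload, then a passing inspecti) can be replayed outside the harness with the same commands. A minimal sketch, assuming the out/minikube-linux-amd64 binary and the functional-736733 profile from this log are still available:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used by this job and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	profile := "functional-736733"
	img := "registry.k8s.io/pause:latest"

	_ = run("-p", profile, "ssh", "sudo", "crictl", "rmi", img)
	if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("image unexpectedly still present after rmi")
	}
	_ = run("-p", profile, "cache", "reload") // push cached images back into the runtime
	if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}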

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 kubectl -- --context functional-736733 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-736733 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (66.62s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736733 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1219 02:31:38.531015    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:38.536329    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:38.546654    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:38.567044    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:38.607364    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:38.687782    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:38.848230    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:39.168866    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:39.809823    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:41.090642    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:43.652437    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:48.772769    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:31:59.013162    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:32:19.493956    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-736733 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.615600773s)
functional_test.go:776: restart took 1m6.615714936s for "functional-736733" cluster.
I1219 02:32:35.424906    8536 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (66.62s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-736733 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
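This test asserts that every tier=control-plane pod in kube-system is Running and reports Ready, as the phase/status lines above show. A rough equivalent using a kubectl jsonpath query (a hypothetical stand-in for the test's own JSON parsing, assuming kubectl and the functional-736733 context):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One line per control-plane pod: "<name> <phase> <Ready condition>".
	query := `{range .items[*]}{.metadata.name} {.status.phase} {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`
	out, err := exec.Command("kubectl", "--context", "functional-736733",
		"-n", "kube-system", "get", "po", "-l", "tier=control-plane",
		"-o", "jsonpath="+query).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	// Healthy output looks like "etcd-functional-736733 Running True", etc.
	fmt.Printf("%s", out)
}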

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-736733 logs: (1.189346404s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 logs --file /tmp/TestFunctionalserialLogsFileCmd1808881217/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-736733 logs --file /tmp/TestFunctionalserialLogsFileCmd1808881217/001/logs.txt: (1.190157509s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-736733 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-736733
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-736733: exit status 115 (351.471422ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30243 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-736733 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)
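SVC_UNREACHABLE above means the NodePort exists but no running pod backs invalid-svc, so the service has no ready endpoints. A minimal sketch of that check, assuming kubectl, the functional-736733 context, and that the invalidsvc.yaml manifest is still applied (the test deletes it at the end):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List the ready endpoint IPs behind the service; an empty result is
	// exactly the condition SVC_UNREACHABLE reports.
	out, err := exec.Command("kubectl", "--context", "functional-736733",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	if len(out) == 0 {
		fmt.Println("invalid-svc has no ready endpoints (expected for this test)")
	} else {
		fmt.Printf("ready endpoint IPs: %s\n", out)
	}
}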

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 config get cpus: exit status 14 (84.559065ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 config get cpus: exit status 14 (76.916589ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-736733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (214.153505ms)

                                                
                                                
-- stdout --
	* [functional-736733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:32:56.167807   46463 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:32:56.168170   46463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:32:56.168198   46463 out.go:374] Setting ErrFile to fd 2...
	I1219 02:32:56.168214   46463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:32:56.168536   46463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:32:56.169239   46463 out.go:368] Setting JSON to false
	I1219 02:32:56.170896   46463 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":927,"bootTime":1766110649,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:32:56.171071   46463 start.go:143] virtualization: kvm guest
	I1219 02:32:56.174005   46463 out.go:179] * [functional-736733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:32:56.175573   46463 notify.go:221] Checking for updates...
	I1219 02:32:56.176930   46463 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:32:56.178918   46463 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:32:56.180183   46463 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:32:56.181451   46463 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:32:56.182935   46463 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:32:56.184120   46463 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:32:56.186285   46463 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:32:56.187349   46463 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:32:56.219228   46463 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:32:56.219368   46463 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:32:56.299159   46463 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:32:56.28448634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:32:56.299313   46463 docker.go:319] overlay module found
	I1219 02:32:56.300966   46463 out.go:179] * Using the docker driver based on existing profile
	I1219 02:32:56.302217   46463 start.go:309] selected driver: docker
	I1219 02:32:56.302236   46463 start.go:928] validating driver "docker" against &{Name:functional-736733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-736733 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:32:56.302400   46463 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:32:56.304234   46463 out.go:203] 
	W1219 02:32:56.305489   46463 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:32:56.306560   46463 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736733 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.52s)
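The dry run fails validation because the requested 250MB is below minikube's usable minimum of 1800MB, per the RSRC_INSUFFICIENT_REQ_MEMORY message above. A minimal sketch of that comparison; the constant comes from the log and the function name is hypothetical, not minikube's own:

package main

import "fmt"

// Floor taken from the RSRC_INSUFFICIENT_REQ_MEMORY message in the log above.
const minUsableMemoryMB = 1800

// validateRequestedMemory is a hypothetical stand-in for minikube's check.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// 250 mirrors the --memory 250MB flag the dry-run test passes.
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}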

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-736733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-736733 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (226.106798ms)

                                                
                                                
-- stdout --
	* [functional-736733] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:32:56.695053   46688 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:32:56.695165   46688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:32:56.695172   46688 out.go:374] Setting ErrFile to fd 2...
	I1219 02:32:56.695179   46688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:32:56.695558   46688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:32:56.696122   46688 out.go:368] Setting JSON to false
	I1219 02:32:56.697440   46688 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":928,"bootTime":1766110649,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:32:56.697521   46688 start.go:143] virtualization: kvm guest
	I1219 02:32:56.699323   46688 out.go:179] * [functional-736733] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1219 02:32:56.700970   46688 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:32:56.701009   46688 notify.go:221] Checking for updates...
	I1219 02:32:56.704165   46688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:32:56.709341   46688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:32:56.715929   46688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:32:56.718306   46688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:32:56.719963   46688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:32:56.722067   46688 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:32:56.722888   46688 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:32:56.754475   46688 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:32:56.754587   46688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:32:56.827693   46688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:32:56.814283176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:32:56.827868   46688 docker.go:319] overlay module found
	I1219 02:32:56.830771   46688 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1219 02:32:56.832133   46688 start.go:309] selected driver: docker
	I1219 02:32:56.832152   46688 start.go:928] validating driver "docker" against &{Name:functional-736733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-736733 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:32:56.832260   46688 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:32:56.834133   46688 out.go:203] 
	W1219 02:32:56.835331   46688 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 02:32:56.836657   46688 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-736733 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-736733 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-jncqv" [334df867-d3ac-42cf-872b-5b673af40232] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-jncqv" [334df867-d3ac-42cf-872b-5b673af40232] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004020547s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32071
functional_test.go:1680: http://192.168.49.2:32071: success! body:
Request served by hello-node-connect-7d85dfc575-jncqv

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32071
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.79s)
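The test resolves the NodePort URL and expects the echo-server body shown above. A minimal sketch of the same request; the URL is taken from this log and only resolves from the CI host while the cluster is up:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NodePort URL printed by "service hello-node-connect --url" above.
	resp, err := http.Get("http://192.168.49.2:32071")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// The echo-server response should start with
	// "Request served by hello-node-connect-...".
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}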

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f760ecd5-fc52-4974-b54a-93838ec8cd8d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003400792s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-736733 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-736733 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-736733 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-736733 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8b915fe6-3d80-4504-ba9c-30ffce62554b] Pending
helpers_test.go:353: "sp-pod" [8b915fe6-3d80-4504-ba9c-30ffce62554b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [8b915fe6-3d80-4504-ba9c-30ffce62554b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004264336s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-736733 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-736733 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-736733 delete -f testdata/storage-provisioner/pod.yaml: (2.076038095s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-736733 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:33:00.082997    8536 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1cfc2147-fef5-4569-9fc4-6a70b21ff1a1] Pending
E1219 02:33:00.454494    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "sp-pod" [1cfc2147-fef5-4569-9fc4-6a70b21ff1a1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [1cfc2147-fef5-4569-9fc4-6a70b21ff1a1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003735308s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-736733 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.81s)
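The PVC test writes a file through sp-pod, deletes and recreates the pod against the same claim, and then lists the file to prove the volume persisted. A minimal sketch of that sequence, assuming kubectl, the functional-736733 context, and the testdata/storage-provisioner manifests; unlike the real test it does not wait for the recreated pod to become Ready before the final exec:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-736733 context from the log.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-736733"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through the mounted PVC
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // delete the pod, keep the claim
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate the pod on the same claim
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // the file should still be listed
	}
	for _, step := range steps {
		out, err := kubectl(step...)
		fmt.Printf("$ kubectl %v\n%s", step, out)
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}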

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh -n functional-736733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cp functional-736733:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd513927644/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh -n functional-736733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh -n functional-736733 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.92s)
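Editor's note: the three cp invocations above cover host-to-node, node-to-host, and copying into a node directory that does not yet exist. A condensed sketch of the same round trip, assuming the functional-736733 profile; the diff step is an illustrative check rather than what the test itself runs:

  out/minikube-linux-amd64 -p functional-736733 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-736733 cp functional-736733:/home/docker/cp-test.txt /tmp/cp-test.txt
  diff testdata/cp-test.txt /tmp/cp-test.txt   # no output means the round trip preserved the contents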

                                                
                                    
x
+
TestFunctional/parallel/MySQL (21.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-736733 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-8hsbz" [06eb4c20-a620-48d7-938e-ae79d7b3a7bd] Pending
helpers_test.go:353: "mysql-6bcdcbc558-8hsbz" [06eb4c20-a620-48d7-938e-ae79d7b3a7bd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-8hsbz" [06eb4c20-a620-48d7-938e-ae79d7b3a7bd] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003993396s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-736733 exec mysql-6bcdcbc558-8hsbz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-736733 exec mysql-6bcdcbc558-8hsbz -- mysql -ppassword -e "show databases;": exit status 1 (103.772566ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:33:09.910187    8536 retry.go:31] will retry after 744.25288ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-736733 exec mysql-6bcdcbc558-8hsbz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-736733 exec mysql-6bcdcbc558-8hsbz -- mysql -ppassword -e "show databases;": exit status 1 (151.369243ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:33:10.806639    8536 retry.go:31] will retry after 1.614244232s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-736733 exec mysql-6bcdcbc558-8hsbz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-736733 exec mysql-6bcdcbc558-8hsbz -- mysql -ppassword -e "show databases;": exit status 1 (154.817564ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:33:12.576122    8536 retry.go:31] will retry after 3.331166887s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-736733 exec mysql-6bcdcbc558-8hsbz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.39s)
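Editor's note: the repeated "Access denied" errors above are expected while the mysql container is still initializing its root account; the test simply retries with backoff until the query succeeds. A rough shell equivalent of that retry loop, assuming the Deployment is named mysql (as implied by the pod name above) and uses root password "password" from testdata/mysql.yaml:

  for i in 1 2 3 4 5; do
    if kubectl --context functional-736733 exec deploy/mysql -- mysql -ppassword -e "show databases;"; then
      break
    fi
    sleep $((i * 2))   # back off a little longer after each failed attempt
  done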

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8536/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo cat /etc/test/nested/copy/8536/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
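Editor's note: the /etc/test/nested/copy/8536/hosts path checked above comes from minikube's file sync feature: files placed under $MINIKUBE_HOME/files/<path> on the host are copied into the node at <path>. A sketch of the mechanism, assuming the sync is applied on the next minikube start of the profile (the file name and content here are illustrative, not the test's actual data):

  mkdir -p ~/.minikube/files/etc/test
  echo "hello from the host" > ~/.minikube/files/etc/test/hello.txt
  out/minikube-linux-amd64 start -p functional-736733
  out/minikube-linux-amd64 -p functional-736733 ssh "cat /etc/test/hello.txt"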

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8536.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo cat /etc/ssl/certs/8536.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8536.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo cat /usr/share/ca-certificates/8536.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/85362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo cat /etc/ssl/certs/85362.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/85362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo cat /usr/share/ca-certificates/85362.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.88s)
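Editor's note: the hashed filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links that TLS clients use to look up CA certificates in /etc/ssl/certs. A quick way to confirm which hash a synced certificate maps to, assuming openssl is available inside the node image:

  out/minikube-linux-amd64 -p functional-736733 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/8536.pem"
  # the printed hash should match the .0 link name checked by the test, e.g. 51391683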

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-736733 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 ssh "sudo systemctl is-active docker": exit status 1 (327.479163ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 ssh "sudo systemctl is-active containerd": exit status 1 (330.03979ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
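Editor's note: systemctl is-active prints the unit state and exits non-zero (status 3) when the unit is inactive, which is why an exit status 1 from the ssh wrapper together with "inactive" on stdout counts as a pass on a crio cluster. A small sketch that makes the expectation explicit, assuming the functional-736733 profile:

  if out/minikube-linux-amd64 -p functional-736733 ssh "sudo systemctl is-active docker"; then
    echo "docker unexpectedly active"
  else
    echo "docker is not the active runtime (expected when the cluster runs crio)"
  fi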

                                                
                                    
x
+
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736733 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-736733
localhost/kicbase/echo-server:functional-736733
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kubernetesui/dashboard-web:1.7.0
docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2
docker.io/kubernetesui/dashboard-auth:1.4.0
docker.io/kubernetesui/dashboard-api:1.14.0
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736733 image ls --format short --alsologtostderr:
I1219 02:33:10.676507   49623 out.go:360] Setting OutFile to fd 1 ...
I1219 02:33:10.676875   49623 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:10.676889   49623 out.go:374] Setting ErrFile to fd 2...
I1219 02:33:10.676896   49623 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:10.677209   49623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:33:10.678236   49623 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:10.678404   49623 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:10.679059   49623 cli_runner.go:164] Run: docker container inspect functional-736733 --format={{.State.Status}}
I1219 02:33:10.712815   49623 ssh_runner.go:195] Run: systemctl --version
I1219 02:33:10.712875   49623 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-736733
I1219 02:33:10.742885   49623 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-736733/id_rsa Username:docker}
I1219 02:33:10.858925   49623 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736733 image ls --format table --alsologtostderr:
┌──────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                      IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├──────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ docker.io/kubernetesui/dashboard-auth            │ 1.4.0                                 │ dd54374d0ab14 │ 49.3MB │
│ docker.io/kubernetesui/dashboard-web             │ 1.7.0                                 │ 59f642f485d26 │ 193MB  │
│ gcr.io/k8s-minikube/busybox                      │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/docker/library/mysql              │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-proxy                       │ v1.34.3                               │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                            │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kubernetesui/dashboard-metrics-scraper │ 1.2.2                                 │ d9cbc9f4053ca │ 38.9MB │
│ localhost/minikube-local-cache-test              │ functional-736733                     │ a382b7e788f77 │ 3.33kB │
│ registry.k8s.io/pause                            │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/etcd                             │ 3.6.5-0                               │ a3e246e9556e9 │ 63.6MB │
│ docker.io/kicbase/echo-server                    │ latest                                │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server                    │ functional-736733                     │ 9056ab77afb8e │ 4.95MB │
│ public.ecr.aws/nginx/nginx                       │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/coredns/coredns                  │ v1.12.1                               │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver                   │ v1.34.3                               │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/kube-controller-manager          │ v1.34.3                               │ 5826b25d990d7 │ 76MB   │
│ registry.k8s.io/kube-scheduler                   │ v1.34.3                               │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                            │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd                       │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ docker.io/kubernetesui/dashboard-api             │ 1.14.0                                │ a0607af4fcd8a │ 55.2MB │
│ gcr.io/k8s-minikube/storage-provisioner          │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                            │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd                       │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
└──────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736733 image ls --format table --alsologtostderr:
I1219 02:33:12.545447   50676 out.go:360] Setting OutFile to fd 1 ...
I1219 02:33:12.545655   50676 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:12.545782   50676 out.go:374] Setting ErrFile to fd 2...
I1219 02:33:12.545796   50676 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:12.548289   50676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:33:12.549660   50676 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:12.549830   50676 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:12.550565   50676 cli_runner.go:164] Run: docker container inspect functional-736733 --format={{.State.Status}}
I1219 02:33:12.576135   50676 ssh_runner.go:195] Run: systemctl --version
I1219 02:33:12.576194   50676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-736733
I1219 02:33:12.600249   50676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-736733/id_rsa Username:docker}
I1219 02:33:12.712373   50676 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736733 image ls --format json --alsologtostderr:
[{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1","repoDigests":["docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052","docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031
136e9eeff"],"repoTags":["docker.io/kubernetesui/dashboard-auth:1.4.0"],"size":"49315433"},{"id":"59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06","repoDigests":["docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30","docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d"],"repoTags":["docker.io/kubernetesui/dashboard-web:1.7.0"],"size":"193323269"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc1935179
05e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],
"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/ec
ho-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-736733"],"size":"4945146"},{"id":"d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167","repoDigests":["docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc","docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775"],"repoTags":["docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"]
,"size":"38883226"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a382b7e788f77dfd1bc9261dc52bad901db8c99031cc524bdb4aa5e5c1de02df","repoDigests":["localhost/minikube-local-cache-test@sha256:8e54c50b8765cec1b723f38f18f12f9b16304e47baba4f6bbc8682fb2663b346"],"repoTags":["localhost/minikube-local-cache-test:functional-736733"],"size":"3330"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/my
sql:8.4"],"size":"803724943"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b","repoDigests":["docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2","docker.io/kubernetesui/dashboard-api@sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff"],"repoTags":["docker.io/kubernetesui/dashboard-api:1.14.0"],"size":"55164394"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controlle
r-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d066
50a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736733 image ls --format json --alsologtostderr:
I1219 02:33:12.253298   50539 out.go:360] Setting OutFile to fd 1 ...
I1219 02:33:12.253420   50539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:12.253430   50539 out.go:374] Setting ErrFile to fd 2...
I1219 02:33:12.253436   50539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:12.253765   50539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:33:12.254537   50539 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:12.254676   50539 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:12.255336   50539 cli_runner.go:164] Run: docker container inspect functional-736733 --format={{.State.Status}}
I1219 02:33:12.282543   50539 ssh_runner.go:195] Run: systemctl --version
I1219 02:33:12.282600   50539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-736733
I1219 02:33:12.307574   50539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-736733/id_rsa Username:docker}
I1219 02:33:12.421004   50539 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
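Editor's note: the JSON output above is a single array of image objects with id, repoDigests, repoTags, and size fields, so it lends itself to ad-hoc filtering. A sketch that lists the five largest images by size, assuming jq is installed on the host:

  out/minikube-linux-amd64 -p functional-736733 image ls --format json \
    | jq -r 'sort_by(.size | tonumber) | reverse | .[] | "\(.size)\t\(.repoTags[0] // .id)"' \
    | head -5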

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736733 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b
repoDigests:
- docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2
- docker.io/kubernetesui/dashboard-api@sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff
repoTags:
- docker.io/kubernetesui/dashboard-api:1.14.0
size: "55164394"
- id: dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1
repoDigests:
- docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052
- docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff
repoTags:
- docker.io/kubernetesui/dashboard-auth:1.4.0
size: "49315433"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a382b7e788f77dfd1bc9261dc52bad901db8c99031cc524bdb4aa5e5c1de02df
repoDigests:
- localhost/minikube-local-cache-test@sha256:8e54c50b8765cec1b723f38f18f12f9b16304e47baba4f6bbc8682fb2663b346
repoTags:
- localhost/minikube-local-cache-test:functional-736733
size: "3330"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167
repoDigests:
- docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc
- docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775
repoTags:
- docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2
size: "38883226"
- id: 59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06
repoDigests:
- docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30
- docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d
repoTags:
- docker.io/kubernetesui/dashboard-web:1.7.0
size: "193323269"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-736733
size: "4945146"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736733 image ls --format yaml --alsologtostderr:
I1219 02:33:11.001132   49767 out.go:360] Setting OutFile to fd 1 ...
I1219 02:33:11.001243   49767 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:11.001250   49767 out.go:374] Setting ErrFile to fd 2...
I1219 02:33:11.001255   49767 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:11.001566   49767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:33:11.002437   49767 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:11.002690   49767 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:11.003355   49767 cli_runner.go:164] Run: docker container inspect functional-736733 --format={{.State.Status}}
I1219 02:33:11.032678   49767 ssh_runner.go:195] Run: systemctl --version
I1219 02:33:11.032785   49767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-736733
I1219 02:33:11.061548   49767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-736733/id_rsa Username:docker}
I1219 02:33:11.174620   49767 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 ssh pgrep buildkitd: exit status 1 (353.981492ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image build -t localhost/my-image:functional-736733 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-736733 image build -t localhost/my-image:functional-736733 testdata/build --alsologtostderr: (4.023901853s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-736733 image build -t localhost/my-image:functional-736733 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> cf9f2b67f81
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-736733
--> 6654b5a3bfb
Successfully tagged localhost/my-image:functional-736733
6654b5a3bfb92b7a0e75f6b1a1b9fb2141d984a5c1042b8f489384fa9102fce1
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-736733 image build -t localhost/my-image:functional-736733 testdata/build --alsologtostderr:
I1219 02:33:11.728397   50255 out.go:360] Setting OutFile to fd 1 ...
I1219 02:33:11.728685   50255 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:11.728697   50255 out.go:374] Setting ErrFile to fd 2...
I1219 02:33:11.728717   50255 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:33:11.728953   50255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:33:11.729548   50255 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:11.730204   50255 config.go:182] Loaded profile config "functional-736733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1219 02:33:11.730643   50255 cli_runner.go:164] Run: docker container inspect functional-736733 --format={{.State.Status}}
I1219 02:33:11.753632   50255 ssh_runner.go:195] Run: systemctl --version
I1219 02:33:11.753748   50255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-736733
I1219 02:33:11.781068   50255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-736733/id_rsa Username:docker}
I1219 02:33:11.905811   50255 build_images.go:162] Building image from path: /tmp/build.390619415.tar
I1219 02:33:11.905878   50255 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 02:33:11.919577   50255 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.390619415.tar
I1219 02:33:11.924498   50255 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.390619415.tar: stat -c "%s %y" /var/lib/minikube/build/build.390619415.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.390619415.tar': No such file or directory
I1219 02:33:11.924541   50255 ssh_runner.go:362] scp /tmp/build.390619415.tar --> /var/lib/minikube/build/build.390619415.tar (3072 bytes)
I1219 02:33:11.953171   50255 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.390619415
I1219 02:33:11.966154   50255 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.390619415 -xf /var/lib/minikube/build/build.390619415.tar
I1219 02:33:11.981199   50255 crio.go:315] Building image: /var/lib/minikube/build/build.390619415
I1219 02:33:11.981318   50255 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-736733 /var/lib/minikube/build/build.390619415 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1219 02:33:15.648993   50255 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-736733 /var/lib/minikube/build/build.390619415 --cgroup-manager=cgroupfs: (3.667646091s)
I1219 02:33:15.649070   50255 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.390619415
I1219 02:33:15.658443   50255 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.390619415.tar
I1219 02:33:15.666754   50255 build_images.go:218] Built localhost/my-image:functional-736733 from /tmp/build.390619415.tar
I1219 02:33:15.666791   50255 build_images.go:134] succeeded building to: functional-736733
I1219 02:33:15.666797   50255 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.65s)
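Editor's note: from the build steps in the log, the testdata/build context is a three-step Dockerfile (FROM the minikube busybox image, a no-op RUN, and an ADD of content.txt). A sketch of an equivalent context built the same way; the content.txt text below is illustrative, not the testdata's actual contents:

  mkdir -p /tmp/imagebuild && cd /tmp/imagebuild
  printf 'hello from the build test\n' > content.txt
  cat > Dockerfile <<'EOF'
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /
  EOF
  out/minikube-linux-amd64 -p functional-736733 image build -t localhost/my-image:functional-736733 .
  out/minikube-linux-amd64 -p functional-736733 image ls | grep my-image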

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-736733
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-736733 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-736733 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-kxt8k" [90364bc5-b0c8-42c7-96a2-026770e59cc9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-kxt8k" [90364bc5-b0c8-42c7-96a2-026770e59cc9] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.00381663s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.17s)
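Editor's note: the deployment above is exposed as a NodePort service, so once the pod is Running the allocated port can be queried and exercised directly. A short follow-on sketch, assuming the hello-node deployment and service created by the commands in the log:

  kubectl --context functional-736733 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
  out/minikube-linux-amd64 -p functional-736733 service hello-node --url   # prints a reachable URL for the NodePort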

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image load --daemon kicbase/echo-server:functional-736733 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image load --daemon kicbase/echo-server:functional-736733 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-736733 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-736733 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-736733 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-736733 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 42778: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-736733 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-736733
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image load --daemon kicbase/echo-server:functional-736733 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.64s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-736733 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [a06c5892-a099-4935-8cf6-2816794c479d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [a06c5892-a099-4935-8cf6-2816794c479d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003505946s
I1219 02:32:52.920261    8536 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image save kicbase/echo-server:functional-736733 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image rm kicbase/echo-server:functional-736733 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-736733
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 image save --daemon kicbase/echo-server:functional-736733 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-736733
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
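
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full image round trip. A rough manual equivalent (tag and archive path are the ones from this run):

    minikube -p functional-736733 image save kicbase/echo-server:functional-736733 ./echo-server-save.tar
    minikube -p functional-736733 image rm kicbase/echo-server:functional-736733
    minikube -p functional-736733 image load ./echo-server-save.tar
    minikube -p functional-736733 image save --daemon kicbase/echo-server:functional-736733
    docker image inspect localhost/kicbase/echo-server:functional-736733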

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 service list
I1219 02:32:50.733509    8536 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 service list -o json
functional_test.go:1504: Took "496.004308ms" to run "out/minikube-linux-amd64 -p functional-736733 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30490
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30490
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
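
The HTTPS, Format and URL subtests all resolve the same hello-node NodePort endpoint in different output shapes; for reference (the 192.168.49.2:30490 endpoint is specific to this run):

    minikube -p functional-736733 service hello-node --url
    minikube -p functional-736733 service --namespace=default --https --url hello-node
    curl http://192.168.49.2:30490/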

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-736733 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.99.6 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
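
The tunnel subtests (StartTunnel, WaitService, IngressIP, AccessDirect) amount to the following manual flow; a sketch, with the LoadBalancer IP taken from this run's output:

    minikube -p functional-736733 tunnel &
    kubectl --context functional-736733 apply -f testdata/testsvc.yaml
    kubectl --context functional-736733 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.105.99.6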

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-736733 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "375.047284ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.617745ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (14.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdany-port4211719905/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766111573744245530" to /tmp/TestFunctionalparallelMountCmdany-port4211719905/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766111573744245530" to /tmp/TestFunctionalparallelMountCmdany-port4211719905/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766111573744245530" to /tmp/TestFunctionalparallelMountCmdany-port4211719905/001/test-1766111573744245530
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.201601ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:32:54.052799    8536 retry.go:31] will retry after 651.935144ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 02:32 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 02:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 02:32 test-1766111573744245530
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh cat /mount-9p/test-1766111573744245530
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-736733 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [be820093-731a-4e77-9032-7a7d1a830c0b] Pending
helpers_test.go:353: "busybox-mount" [be820093-731a-4e77-9032-7a7d1a830c0b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [be820093-731a-4e77-9032-7a7d1a830c0b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [be820093-731a-4e77-9032-7a7d1a830c0b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.004797252s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-736733 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdany-port4211719905/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.19s)
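
The any-port case boils down to mounting a host directory over 9p and checking it from inside the node; a minimal sketch (host path arbitrary):

    minikube mount -p functional-736733 /tmp/hostdir:/mount-9p &
    minikube -p functional-736733 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-736733 ssh -- ls -la /mount-9p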

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "348.136975ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.533636ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
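
Both profile subtests hit the same listing code in different output modes; for reference:

    minikube profile list
    minikube profile list -l                 # -l (--light) skips cluster status checks, hence the shorter timing above
    minikube profile list -o json --light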

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdspecific-port286714817/001:/mount-9p --alsologtostderr -v=1 --port 36533]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (345.492645ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:33:08.281625    8536 retry.go:31] will retry after 543.118191ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdspecific-port286714817/001:/mount-9p --alsologtostderr -v=1 --port 36533] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 ssh "sudo umount -f /mount-9p": exit status 1 (314.313491ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-736733 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdspecific-port286714817/001:/mount-9p --alsologtostderr -v=1 --port 36533] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)
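
specific-port is the same flow pinned to a fixed 9p server port via --port (36533 in this run):

    minikube mount -p functional-736733 /tmp/hostdir:/mount-9p --port 36533 &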

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T" /mount1: exit status 1 (551.085038ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:33:10.583601    8536 retry.go:31] will retry after 292.63453ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-736733 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-736733 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-736733 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3724702152/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.15s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-736733
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-736733
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-736733
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22230-4987/.minikube/files/etc/test/nested/copy/8536/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (36.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382801 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-382801 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (36.518622324s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (36.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1219 02:33:56.801516    8536 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382801 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-382801 --alsologtostderr -v=8: (6.396862504s)
functional_test.go:678: soft start took 6.397341928s for "functional-382801" cluster.
I1219 02:34:03.198892    8536 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-382801 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.63s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2971123181/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cache add minikube-local-cache-test:functional-382801
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cache delete minikube-local-cache-test:functional-382801
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-382801
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (0.85s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.48877ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.55s)
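
cache_reload covers recovery of a cached image that has been removed from the node's runtime; a manual equivalent built from the same commands:

    minikube -p functional-382801 cache add registry.k8s.io/pause:latest
    minikube -p functional-382801 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-382801 cache reload
    minikube -p functional-382801 ssh sudo crictl inspecti registry.k8s.io/pause:latest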

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 kubectl -- --context functional-382801 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-382801 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (45.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382801 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1219 02:34:22.377822    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-382801 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.830948851s)
functional_test.go:776: restart took 45.831123631s for "functional-382801" cluster.
I1219 02:34:54.947514    8536 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (45.83s)
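
ExtraConfig restarts the existing profile while threading a kube-apiserver admission-plugin flag through --extra-config; the invocation, for reference:

    minikube start -p functional-382801 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all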

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-382801 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)
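
ComponentHealth queries the static control-plane pods by label and asserts their phase and readiness; an equivalent spot check:

    kubectl --context functional-382801 get pods -n kube-system -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'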

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-382801 logs: (1.194251734s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi977899963/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-382801 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi977899963/001/logs.txt: (1.224751308s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.23s)
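
LogsCmd and LogsFileCmd run the same log collector, once to stdout and once to a file:

    minikube -p functional-382801 logs
    minikube -p functional-382801 logs --file /tmp/logs.txt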

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-382801 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-382801
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-382801: exit status 115 (344.108445ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30676 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-382801 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 config get cpus: exit status 14 (82.813774ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 config get cpus: exit status 14 (80.42293ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)
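
ConfigCmd cycles a key through set/get/unset; when the key is absent, config get exits with status 14, which is exactly what the two non-zero exits above assert. A minimal sketch:

    minikube -p functional-382801 config set cpus 2
    minikube -p functional-382801 config get cpus
    minikube -p functional-382801 config unset cpus
    minikube -p functional-382801 config get cpus   # exit status 14: key not found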

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382801 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-382801 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (174.772467ms)

                                                
                                                
-- stdout --
	* [functional-382801] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:35:13.375022   66955 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:35:13.375300   66955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:13.375311   66955 out.go:374] Setting ErrFile to fd 2...
	I1219 02:35:13.375315   66955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:13.375563   66955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:35:13.376041   66955 out.go:368] Setting JSON to false
	I1219 02:35:13.377158   66955 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1064,"bootTime":1766110649,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:35:13.377225   66955 start.go:143] virtualization: kvm guest
	I1219 02:35:13.379244   66955 out.go:179] * [functional-382801] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:35:13.380520   66955 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:35:13.380570   66955 notify.go:221] Checking for updates...
	I1219 02:35:13.382729   66955 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:35:13.383883   66955 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:35:13.385374   66955 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:35:13.386710   66955 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:35:13.388922   66955 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:35:13.390848   66955 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 02:35:13.391579   66955 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:35:13.420751   66955 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:35:13.420864   66955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:35:13.483142   66955 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:35:13.473341446 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:35:13.483233   66955 docker.go:319] overlay module found
	I1219 02:35:13.485329   66955 out.go:179] * Using the docker driver based on existing profile
	I1219 02:35:13.486556   66955 start.go:309] selected driver: docker
	I1219 02:35:13.486570   66955 start.go:928] validating driver "docker" against &{Name:functional-382801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-382801 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:13.486653   66955 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:35:13.488159   66955 out.go:203] 
	W1219 02:35:13.489156   66955 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:35:13.490137   66955 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382801 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.41s)
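
DryRun relies on the memory validation rejecting a 250MB request before the existing cluster is touched (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY above); the failing invocation, trimmed:

    minikube start -p functional-382801 --dry-run --memory 250MB --driver=docker --container-runtime=crio --kubernetes-version=v1.35.0-rc.1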

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-382801 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-382801 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (174.897162ms)

                                                
                                                
-- stdout --
	* [functional-382801] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:35:13.795134   67328 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:35:13.795255   67328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:13.795267   67328 out.go:374] Setting ErrFile to fd 2...
	I1219 02:35:13.795274   67328 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:35:13.795634   67328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:35:13.796127   67328 out.go:368] Setting JSON to false
	I1219 02:35:13.797228   67328 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1065,"bootTime":1766110649,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:35:13.797285   67328 start.go:143] virtualization: kvm guest
	I1219 02:35:13.799159   67328 out.go:179] * [functional-382801] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1219 02:35:13.800378   67328 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:35:13.800365   67328 notify.go:221] Checking for updates...
	I1219 02:35:13.802746   67328 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:35:13.804239   67328 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:35:13.805632   67328 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:35:13.807577   67328 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:35:13.808694   67328 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:35:13.810116   67328 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1219 02:35:13.810744   67328 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:35:13.835634   67328 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:35:13.835775   67328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:35:13.889870   67328 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-19 02:35:13.880473265 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:35:13.889992   67328 docker.go:319] overlay module found
	I1219 02:35:13.891763   67328 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1219 02:35:13.892819   67328 start.go:309] selected driver: docker
	I1219 02:35:13.892835   67328 start.go:928] validating driver "docker" against &{Name:functional-382801 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-382801 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:35:13.892916   67328 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:35:13.894485   67328 out.go:203] 
	W1219 02:35:13.895619   67328 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 02:35:13.896726   67328 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.01s)
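For reference, the three status formats exercised above can be reproduced by hand against the same profile; this is a sketch that reuses only the commands already shown in the log (the -f argument is a Go template over the fields printed there):

	# default human-readable status
	out/minikube-linux-amd64 -p functional-382801 status
	# custom Go-template format (field names as used by the test above)
	out/minikube-linux-amd64 -p functional-382801 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	# machine-readable JSON
	out/minikube-linux-amd64 -p functional-382801 status -o json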

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (7.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-382801 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-382801 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-6h5j6" [f16898bc-4c5e-44aa-bd9b-5231adf5c8e1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-6h5j6" [f16898bc-4c5e-44aa-bd9b-5231adf5c8e1] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004035306s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32431
functional_test.go:1680: http://192.168.49.2:32431: success! body:
Request served by hello-node-connect-9f67c86d4-6h5j6

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32431
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (7.52s)
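The steps above amount to a small end-to-end NodePort check. A minimal sketch of the same workflow, using the commands from the log (the final curl is an assumed stand-in for the HTTP probe the harness performs itself):

	kubectl --context functional-382801 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-382801 expose deployment hello-node-connect --type=NodePort --port=8080
	# resolve the NodePort URL through minikube, then hit it
	curl "$(out/minikube-linux-amd64 -p functional-382801 service hello-node-connect --url)"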

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (26.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [20180c0d-2548-4e50-bf80-078e0e1c0e70] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003453404s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-382801 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-382801 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-382801 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-382801 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ac141591-1a49-46d6-b6c4-8aa347ad9154] Pending
helpers_test.go:353: "sp-pod" [ac141591-1a49-46d6-b6c4-8aa347ad9154] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [ac141591-1a49-46d6-b6c4-8aa347ad9154] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.087978853s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-382801 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-382801 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-382801 delete -f testdata/storage-provisioner/pod.yaml: (2.188915687s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-382801 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c0e503e8-b519-4063-adfa-1046116d0dee] Pending
helpers_test.go:353: "sp-pod" [c0e503e8-b519-4063-adfa-1046116d0dee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [c0e503e8-b519-4063-adfa-1046116d0dee] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.005132398s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-382801 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (26.10s)
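The PVC flow above (provision a claim, write through the mount, recreate the pod, verify the file survived) can be replayed manually with the same manifests; a sketch assuming the testdata paths shown in the log:

	kubectl --context functional-382801 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-382801 apply -f testdata/storage-provisioner/pod.yaml
	# write through the mounted claim
	kubectl --context functional-382801 exec sp-pod -- touch /tmp/mount/foo
	# recreate the pod and confirm the data persisted on the volume
	kubectl --context functional-382801 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-382801 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-382801 exec sp-pod -- ls /tmp/mount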

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh -n functional-382801 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cp functional-382801:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3962573232/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh -n functional-382801 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh -n functional-382801 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.91s)
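CpCmd round-trips a file in three directions. A sketch of the same sequence, with the host-side destination simplified to /tmp instead of the harness's temporary directory:

	# host -> node
	out/minikube-linux-amd64 -p functional-382801 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-382801 ssh -n functional-382801 "sudo cat /home/docker/cp-test.txt"
	# node -> host
	out/minikube-linux-amd64 -p functional-382801 cp functional-382801:/home/docker/cp-test.txt /tmp/cp-test.txt
	# host -> node, into a directory that does not exist yet
	out/minikube-linux-amd64 -p functional-382801 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt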

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (24.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-382801 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-hvl88" [3fa255b2-c0bd-498f-a357-986c821a3522] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-hvl88" [3fa255b2-c0bd-498f-a357-986c821a3522] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 16.004154682s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;": exit status 1 (119.380939ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:35:31.638119    8536 retry.go:31] will retry after 859.242887ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;": exit status 1 (129.360579ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:35:32.627620    8536 retry.go:31] will retry after 889.268152ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;": exit status 1 (128.640204ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:35:33.646809    8536 retry.go:31] will retry after 3.218038409s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;": exit status 1 (85.94499ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:35:36.952377    8536 retry.go:31] will retry after 3.042625115s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (24.75s)
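The failed attempts above are expected while mysqld is still initializing inside the pod; the harness simply retries the query until it succeeds. A rough shell equivalent of that retry behaviour (the until/sleep wrapper is an illustration, not the harness's own code):

	until kubectl --context functional-382801 exec mysql-7d7b65bc95-hvl88 -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 2   # ERROR 1045 / ERROR 2002 here just mean the server is not ready yet
	done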

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8536/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /etc/test/nested/copy/8536/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.28s)
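FileSync expects a file staged on the host (via minikube's file-sync mechanism under the profile's .minikube/files tree; the staging step itself is not shown in this excerpt) to appear at the mirrored path inside the node. The check is a single ssh cat, with the per-run directory the harness picked:

	out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /etc/test/nested/copy/8536/hosts"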

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8536.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /etc/ssl/certs/8536.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8536.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /usr/share/ca-certificates/8536.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/85362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /etc/ssl/certs/85362.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/85362.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /usr/share/ca-certificates/85362.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.86s)
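CertSync performs the same kind of check for certificates: each staged .pem should be readable both under /etc/ssl/certs and /usr/share/ca-certificates inside the node, along with what appears to be an OpenSSL-style hash-named copy. A condensed sketch of the probes from the log:

	out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /etc/ssl/certs/8536.pem"
	out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /usr/share/ca-certificates/8536.pem"
	out/minikube-linux-amd64 -p functional-382801 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy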

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-382801 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.08s)
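The label check pulls the first node's label keys through a go-template. Below is the same query with quoting adjusted for an interactive shell, plus kubectl's built-in --show-labels as a simpler alternative:

	kubectl --context functional-382801 get nodes -o go-template \
	  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
	# or, interactively:
	kubectl --context functional-382801 get nodes --show-labels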

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.67s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 ssh "sudo systemctl is-active docker": exit status 1 (326.916302ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 ssh "sudo systemctl is-active containerd": exit status 1 (339.180569ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.67s)
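On a crio-based profile the other container runtimes should be disabled, so both probes printing "inactive" (with systemctl's non-zero exit surfaced as ssh status 3) is the passing outcome here, despite the "Non-zero exit" lines. The probes themselves:

	out/minikube-linux-amd64 -p functional-382801 ssh "sudo systemctl is-active docker"      # expect: inactive
	out/minikube-linux-amd64 -p functional-382801 ssh "sudo systemctl is-active containerd"  # expect: inactive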

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.61s)
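The two Version subtests boil down to two invocations, both verbatim from the log: a short version string and a JSON breakdown per component:

	out/minikube-linux-amd64 -p functional-382801 version --short
	out/minikube-linux-amd64 -p functional-382801 version -o=json --components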

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382801 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-382801
localhost/kicbase/echo-server:functional-382801
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kubernetesui/dashboard-auth:1.4.0
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382801 image ls --format short --alsologtostderr:
I1219 02:35:25.163189   70590 out.go:360] Setting OutFile to fd 1 ...
I1219 02:35:25.163307   70590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:25.163313   70590 out.go:374] Setting ErrFile to fd 2...
I1219 02:35:25.163319   70590 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:25.163590   70590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:35:25.164285   70590 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:25.164452   70590 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:25.165020   70590 cli_runner.go:164] Run: docker container inspect functional-382801 --format={{.State.Status}}
I1219 02:35:25.187628   70590 ssh_runner.go:195] Run: systemctl --version
I1219 02:35:25.187689   70590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382801
I1219 02:35:25.212168   70590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-382801/id_rsa Username:docker}
I1219 02:35:25.322991   70590 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.39s)
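This and the three ImageList subtests that follow only vary the output format of the same command; a sketch of all four, with the format values taken from the tests in this group:

	out/minikube-linux-amd64 -p functional-382801 image ls --format short
	out/minikube-linux-amd64 -p functional-382801 image ls --format table
	out/minikube-linux-amd64 -p functional-382801 image ls --format json
	out/minikube-linux-amd64 -p functional-382801 image ls --format yaml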

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382801 image ls --format table --alsologtostderr:
┌──────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                      IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├──────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler                   │ v1.35.0-rc.1                          │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                            │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server                    │ latest                                │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server                    │ functional-382801                     │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd                       │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ docker.io/kubernetesui/dashboard-web             │ 1.7.0                                 │ 59f642f485d26 │ 193MB  │
│ localhost/minikube-local-cache-test              │ functional-382801                     │ a382b7e788f77 │ 3.33kB │
│ public.ecr.aws/nginx/nginx                       │ alpine                                │ 04da2b0513cd7 │ 55.2MB │
│ registry.k8s.io/coredns/coredns                  │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                             │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager          │ v1.35.0-rc.1                          │ 5032a56602e1b │ 76.9MB │
│ docker.io/kubernetesui/dashboard-api             │ 1.14.0                                │ a0607af4fcd8a │ 55.2MB │
│ docker.io/kubernetesui/dashboard-auth            │ 1.4.0                                 │ dd54374d0ab14 │ 49.3MB │
│ gcr.io/k8s-minikube/busybox                      │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox                      │ latest                                │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                               │ functional-382801                     │ f7abd19e3c440 │ 1.47MB │
│ registry.k8s.io/pause                            │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                            │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kubernetesui/dashboard-metrics-scraper │ 1.2.2                                 │ d9cbc9f4053ca │ 38.9MB │
│ registry.k8s.io/kube-apiserver                   │ v1.35.0-rc.1                          │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/pause                            │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ public.ecr.aws/docker/library/mysql              │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ docker.io/kindest/kindnetd                       │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ gcr.io/k8s-minikube/storage-provisioner          │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy                       │ v1.35.0-rc.1                          │ af0321f3a4f38 │ 72MB   │
└──────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382801 image ls --format table --alsologtostderr:
I1219 02:35:31.192242   71884 out.go:360] Setting OutFile to fd 1 ...
I1219 02:35:31.192494   71884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:31.192503   71884 out.go:374] Setting ErrFile to fd 2...
I1219 02:35:31.192507   71884 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:31.192725   71884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:35:31.193415   71884 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:31.193556   71884 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:31.194267   71884 cli_runner.go:164] Run: docker container inspect functional-382801 --format={{.State.Status}}
I1219 02:35:31.215410   71884 ssh_runner.go:195] Run: systemctl --version
I1219 02:35:31.215454   71884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382801
I1219 02:35:31.236350   71884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-382801/id_rsa Username:docker}
I1219 02:35:31.346198   71884 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382801 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"f7abd19e3c4406670a4d5572d42e69a059b077deef37ad0916cd7cb5a7a9fc53","repoDigests":["localhost/my-image@sha256:911e11bee628809c6bbf666185a9c9f9a9ee087d14bc72d62bd04f0038d0f076"],"repoTags":["localhost/my-image:functional-382801"],"size":"1468743"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size
":"803724943"},{"id":"04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c","public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55157106"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167","repoDigests":["docker.io/kubernetesui/dashboard-metrics-scraper@sha256:0cdefa0492f2868b93f43880ffad4bd8ae8105c02dc5661352a0a40723b05dbc","docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775"],"repoTags":["docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"],"s
ize":"38883226"},{"id":"a382b7e788f77dfd1bc9261dc52bad901db8c99031cc524bdb4aa5e5c1de02df","repoDigests":["localhost/minikube-local-cache-test@sha256:8e54c50b8765cec1b723f38f18f12f9b16304e47baba4f6bbc8682fb2663b346"],"repoTags":["localhost/minikube-local-cache-test:functional-382801"],"size":"3330"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kub
e-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b","repoDigests":["docker.io/kubernetesui/dashboard-api@sha256:2bd14c0ffee99d15fb1595644ebd1083ac32c5157c6e6fd8615b0f556a1390c2","docker.io/kubernetesui/dashboard-api@sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff"],"repoTags":["docker.io/k
ubernetesui/dashboard-api:1.14.0"],"size":"55164394"},{"id":"216e011016ed12179f4179e5b36adaca49feff52d2ee3f5b40288bbda2bb8633","repoDigests":["docker.io/library/062b80103aae058dd25f338e1e1ed6f11b9d14391d6295b676381974ae699a84-tmp@sha256:d651a5fd1fab9232d678c8dc7662d196b6dc4da751f0450f6ac5aece53982f44"],"repoTags":[],"size":"1466131"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io
/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"dd
54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1","repoDigests":["docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052","docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff"],"repoTags":["docker.io/kubernetesui/dashboard-auth:1.4.0"],"size":"49315433"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a712722589
0"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1dd
b9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-382801"],"size":"4945246"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06","repoDigests":["docker.io/kubernetesui/dashboard-web@sha256:5924e55a2eb65774b68e35fd0d0b41a3ef5dc6ef3e02dce6e340cf59b6d67d30","docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d"],"repoTags":["docker.io/kubernetesui/dashboard-web:1.7.0"],"size":"193323269"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382801 image ls --format json --alsologtostderr:
I1219 02:35:30.203250   71554 out.go:360] Setting OutFile to fd 1 ...
I1219 02:35:30.203627   71554 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:30.203734   71554 out.go:374] Setting ErrFile to fd 2...
I1219 02:35:30.203765   71554 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:30.204213   71554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:35:30.205172   71554 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:30.205362   71554 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:30.206038   71554 cli_runner.go:164] Run: docker container inspect functional-382801 --format={{.State.Status}}
I1219 02:35:30.235159   71554 ssh_runner.go:195] Run: systemctl --version
I1219 02:35:30.235240   71554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382801
I1219 02:35:30.262034   71554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-382801/id_rsa Username:docker}
I1219 02:35:30.370646   71554 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382801 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-382801
size: "4945246"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1
repoDigests:
- docker.io/kubernetesui/dashboard-auth@sha256:484c65877a3aa9e7095745576e55310f09f135b2fe5d9694289352fe65986052
- docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff
repoTags:
- docker.io/kubernetesui/dashboard-auth:1.4.0
size: "49315433"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a382b7e788f77dfd1bc9261dc52bad901db8c99031cc524bdb4aa5e5c1de02df
repoDigests:
- localhost/minikube-local-cache-test@sha256:8e54c50b8765cec1b723f38f18f12f9b16304e47baba4f6bbc8682fb2663b346
repoTags:
- localhost/minikube-local-cache-test:functional-382801
size: "3330"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: 04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:00e053577693e0ee5f7f8b433cdb249624af188622d0da5df20eef4e25a0881c
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55157106"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382801 image ls --format yaml --alsologtostderr:
I1219 02:35:25.540887   70650 out.go:360] Setting OutFile to fd 1 ...
I1219 02:35:25.541151   70650 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:25.541161   70650 out.go:374] Setting ErrFile to fd 2...
I1219 02:35:25.541166   70650 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:25.541364   70650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:35:25.541938   70650 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:25.542037   70650 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:25.542426   70650 cli_runner.go:164] Run: docker container inspect functional-382801 --format={{.State.Status}}
I1219 02:35:25.562135   70650 ssh_runner.go:195] Run: systemctl --version
I1219 02:35:25.562198   70650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382801
I1219 02:35:25.581784   70650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-382801/id_rsa Username:docker}
I1219 02:35:25.694566   70650 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.45s)
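
Note: the listing above can be reproduced outside the test harness. A minimal sketch using the same commands the test drives; the profile name functional-382801 is taken from the log, and `minikube` stands for the out/minikube-linux-amd64 binary under test.

# List images known to the cri-o runtime in the cluster, in the same YAML format as above
minikube -p functional-382801 image ls --format yaml --alsologtostderr

# Roughly equivalent view straight from the runtime inside the node (the test shells out to crictl)
minikube -p functional-382801 ssh -- sudo crictl images --output json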

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (5.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 ssh pgrep buildkitd: exit status 1 (286.57424ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image build -t localhost/my-image:functional-382801 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-382801 image build -t localhost/my-image:functional-382801 testdata/build --alsologtostderr: (3.667513386s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-382801 image build -t localhost/my-image:functional-382801 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 216e011016e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-382801
--> f7abd19e3c4
Successfully tagged localhost/my-image:functional-382801
f7abd19e3c4406670a4d5572d42e69a059b077deef37ad0916cd7cb5a7a9fc53
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-382801 image build -t localhost/my-image:functional-382801 testdata/build --alsologtostderr:
I1219 02:35:26.278843   70906 out.go:360] Setting OutFile to fd 1 ...
I1219 02:35:26.279129   70906 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:26.279142   70906 out.go:374] Setting ErrFile to fd 2...
I1219 02:35:26.279150   70906 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:35:26.279464   70906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
I1219 02:35:26.280246   70906 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:26.281037   70906 config.go:182] Loaded profile config "functional-382801": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1219 02:35:26.282067   70906 cli_runner.go:164] Run: docker container inspect functional-382801 --format={{.State.Status}}
I1219 02:35:26.305051   70906 ssh_runner.go:195] Run: systemctl --version
I1219 02:35:26.305115   70906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382801
I1219 02:35:26.326719   70906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/functional-382801/id_rsa Username:docker}
I1219 02:35:26.439404   70906 build_images.go:162] Building image from path: /tmp/build.1453280595.tar
I1219 02:35:26.439480   70906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 02:35:26.450591   70906 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1453280595.tar
I1219 02:35:26.454871   70906 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1453280595.tar: stat -c "%s %y" /var/lib/minikube/build/build.1453280595.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1453280595.tar': No such file or directory
I1219 02:35:26.454900   70906 ssh_runner.go:362] scp /tmp/build.1453280595.tar --> /var/lib/minikube/build/build.1453280595.tar (3072 bytes)
I1219 02:35:26.475694   70906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1453280595
I1219 02:35:26.483984   70906 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1453280595 -xf /var/lib/minikube/build/build.1453280595.tar
I1219 02:35:26.492817   70906 crio.go:315] Building image: /var/lib/minikube/build/build.1453280595
I1219 02:35:26.492917   70906 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-382801 /var/lib/minikube/build/build.1453280595 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1219 02:35:29.848854   70906 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-382801 /var/lib/minikube/build/build.1453280595 --cgroup-manager=cgroupfs: (3.3559022s)
I1219 02:35:29.848945   70906 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1453280595
I1219 02:35:29.861132   70906 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1453280595.tar
I1219 02:35:29.872469   70906 build_images.go:218] Built localhost/my-image:functional-382801 from /tmp/build.1453280595.tar
I1219 02:35:29.872507   70906 build_images.go:134] succeeded building to: functional-382801
I1219 02:35:29.872514   70906 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-382801 image ls: (1.225956182s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (5.18s)
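
Note: the three STEP lines above imply a minimal build context. The sketch below reconstructs it and re-runs the same build by hand; the FROM/RUN/ADD steps come straight from the log, while the directory path and the contents of content.txt are placeholders (the real context lives in testdata/build).

# Rebuild the testdata/build image manually (placeholder context directory)
mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
echo "placeholder" > content.txt            # stand-in for testdata/build/content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
minikube -p functional-382801 image build -t localhost/my-image:functional-382801 . --alsologtostderr
minikube -p functional-382801 image ls | grep my-image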

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-382801
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image load --daemon kicbase/echo-server:functional-382801 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-382801 image load --daemon kicbase/echo-server:functional-382801 --alsologtostderr: (1.066399489s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.32s)
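
Note: the Setup and ImageLoadDaemon steps together form a small round trip that can be repeated by hand. A sketch using the same image tags and profile that appear in the log:

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-382801
# Copy the image from the local docker daemon into the cluster's runtime
minikube -p functional-382801 image load --daemon kicbase/echo-server:functional-382801 --alsologtostderr
minikube -p functional-382801 image ls | grep echo-server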

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (7.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-382801 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-382801 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-2mrns" [f062e3c3-e700-49ff-88b0-ade1ac448aaf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-2mrns" [f062e3c3-e700-49ff-88b0-ade1ac448aaf] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003811484s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (7.16s)
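
Note: the same deploy-and-expose flow can be driven manually. A sketch assuming the functional-382801 kubectl context from the log; the kubectl wait call stands in for the test harness's pod polling and is not part of the test itself.

kubectl --context functional-382801 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-382801 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-382801 wait --for=condition=Ready pod -l app=hello-node --timeout=120s
# Once ready, the NodePort URL can be resolved through minikube (see the ServiceCmd/URL test below)
minikube -p functional-382801 service hello-node --url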

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image load --daemon kicbase/echo-server:functional-382801 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-382801 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-382801 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-382801 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 64119: os: process already finished
helpers_test.go:520: unable to terminate pid 63863: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-382801 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-382801
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image load --daemon kicbase/echo-server:functional-382801 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-382801 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (7.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-382801 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [81fe6e95-031f-4b51-915d-66d50958da91] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [81fe6e95-031f-4b51-915d-66d50958da91] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 7.004170883s
I1219 02:35:11.558727    8536 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (7.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.96s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image save kicbase/echo-server:functional-382801 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image rm kicbase/echo-server:functional-382801 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-382801
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 image save --daemon kicbase/echo-server:functional-382801 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-382801
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.39s)
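
Note: taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a save/remove/restore loop. A condensed sketch of that loop with the same commands; the tar path here is a placeholder, the test writes into the Jenkins workspace.

minikube -p functional-382801 image save kicbase/echo-server:functional-382801 /tmp/echo-server-save.tar --alsologtostderr
minikube -p functional-382801 image rm kicbase/echo-server:functional-382801 --alsologtostderr
minikube -p functional-382801 image load /tmp/echo-server-save.tar --alsologtostderr
minikube -p functional-382801 image ls | grep echo-server
# Push the cluster-side image back into the local docker daemon
minikube -p functional-382801 image save --daemon kicbase/echo-server:functional-382801 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-382801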

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 service list -o json
functional_test.go:1504: Took "509.787573ms" to run "out/minikube-linux-amd64 -p functional-382801 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 service --namespace=default --https --url hello-node
I1219 02:35:10.381437    8536 detect.go:223] nested VM detected
functional_test.go:1532: found endpoint: https://192.168.49.2:31442
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31442
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-382801 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.66.168 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)
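
Note: the TunnelCmd subtests above correspond to the following manual flow. A sketch assuming the testsvc.yaml manifest from testdata (it creates a LoadBalancer service named nginx-svc) and a second terminal for the tunnel process; the jsonpath expression is the one the test uses.

# Terminal 1: keep the tunnel running so LoadBalancer services receive an ingress IP
minikube -p functional-382801 tunnel --alsologtostderr

# Terminal 2: create the service, wait for the pod, then read the assigned IP and hit it
kubectl --context functional-382801 apply -f testdata/testsvc.yaml
kubectl --context functional-382801 wait --for=condition=Ready pod -l run=nginx-svc --timeout=240s
IP=$(kubectl --context functional-382801 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP" | head -n 5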

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-382801 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "380.184626ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "81.92835ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (7.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun315279662/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766111712450310185" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun315279662/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766111712450310185" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun315279662/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766111712450310185" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun315279662/001/test-1766111712450310185
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.22729ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:35:12.758887    8536 retry.go:31] will retry after 552.818847ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 02:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 02:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 02:35 test-1766111712450310185
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh cat /mount-9p/test-1766111712450310185
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-382801 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [a595752f-4e35-4685-aeff-c4003f2c2bde] Pending
helpers_test.go:353: "busybox-mount" [a595752f-4e35-4685-aeff-c4003f2c2bde] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [a595752f-4e35-4685-aeff-c4003f2c2bde] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [a595752f-4e35-4685-aeff-c4003f2c2bde] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.017937803s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-382801 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun315279662/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (7.26s)
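
Note: the 9p mount flow in this test can be reproduced by hand. A sketch with a placeholder host directory; the findmnt, ls and umount commands are the ones the test itself runs over ssh.

# Terminal 1: expose a host directory inside the node over 9p
mkdir -p /tmp/mount-src && echo "test" > /tmp/mount-src/created-by-test
minikube mount -p functional-382801 /tmp/mount-src:/mount-9p --alsologtostderr -v=1

# Terminal 2: verify the mount from inside the node, then clean up
minikube -p functional-382801 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-382801 ssh -- ls -la /mount-9p
minikube -p functional-382801 ssh "sudo umount -f /mount-9p"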

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "353.271912ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.796946ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.41s)
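
Note: the ProfileCmd subtests only time the list commands; for scripting, the JSON output is the useful form. A small sketch; the jq filter is illustration only and assumes the output keeps its usual valid/invalid top-level keys.

# Human-readable listings, as timed by the tests above
minikube profile list
minikube profile list -l

# Machine-readable form; assumes a {"valid": [...], "invalid": [...]} shape
minikube profile list -o json --light | jq -r '.valid[].Name'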

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4009361728/001:/mount-9p --alsologtostderr -v=1 --port 45901]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.701632ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:35:20.035339    8536 retry.go:31] will retry after 597.780103ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh -- ls -la /mount-9p
I1219 02:35:20.994803    8536 detect.go:223] nested VM detected
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4009361728/001:/mount-9p --alsologtostderr -v=1 --port 45901] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 ssh "sudo umount -f /mount-9p": exit status 1 (332.12714ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-382801 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4009361728/001:/mount-9p --alsologtostderr -v=1 --port 45901] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T" /mount1: exit status 1 (404.797116ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:35:22.251550    8536 retry.go:31] will retry after 472.755683ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-382801 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-382801 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-382801 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1586544246/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-382801
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-382801
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-382801
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (107.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1219 02:36:38.531624    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:06.221444    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m47.048428603s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (107.79s)
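
Note: the HA start command is the interesting part of this test. A sketch of the same cluster lifecycle outside the harness; the profile name and resource sizes are copied from the log, and the worker-node add comes from the AddWorkerNode test further down.

minikube start -p ha-314046 --ha --memory 3072 --wait true --driver=docker --container-runtime=crio --alsologtostderr -v 5
minikube -p ha-314046 status --alsologtostderr -v 5
# Later in the suite a fourth node is added as a worker:
minikube -p ha-314046 node add --alsologtostderr -v 5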

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 kubectl -- rollout status deployment/busybox: (1.845845535s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-58sf6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-sjk2d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-xg4jg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-58sf6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-sjk2d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-xg4jg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-58sf6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-sjk2d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-xg4jg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.81s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-58sf6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-58sf6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-sjk2d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-sjk2d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-xg4jg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 kubectl -- exec busybox-7b57f96db7-xg4jg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.06s)
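
Note: DeployApp and PingHostFromPods boil down to two in-pod checks, DNS resolution and reachability of the host gateway. A sketch against one of the busybox pods listed above; the pod name is taken from the log, and 192.168.49.1 is the docker-driver host address the test pings.

# In-cluster and external DNS from inside a busybox pod
minikube -p ha-314046 kubectl -- exec busybox-7b57f96db7-58sf6 -- nslookup kubernetes.default.svc.cluster.local
minikube -p ha-314046 kubectl -- exec busybox-7b57f96db7-58sf6 -- nslookup kubernetes.io
# Reachability of the minikube host from the pod network
minikube -p ha-314046 kubectl -- exec busybox-7b57f96db7-58sf6 -- sh -c "ping -c 1 192.168.49.1"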

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 node add --alsologtostderr -v 5
E1219 02:37:42.700037    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:42.706003    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:42.716314    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:42.736821    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:42.777112    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:42.857492    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:43.018057    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:43.338435    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:43.979590    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:45.260561    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:47.821313    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:37:52.941724    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 node add --alsologtostderr -v 5: (23.378666322s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.28s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-314046 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (17.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp testdata/cp-test.txt ha-314046:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2403804637/001/cp-test_ha-314046.txt
E1219 02:38:03.182563    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046:/home/docker/cp-test.txt ha-314046-m02:/home/docker/cp-test_ha-314046_ha-314046-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m02 "sudo cat /home/docker/cp-test_ha-314046_ha-314046-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046:/home/docker/cp-test.txt ha-314046-m03:/home/docker/cp-test_ha-314046_ha-314046-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m03 "sudo cat /home/docker/cp-test_ha-314046_ha-314046-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046:/home/docker/cp-test.txt ha-314046-m04:/home/docker/cp-test_ha-314046_ha-314046-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m04 "sudo cat /home/docker/cp-test_ha-314046_ha-314046-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp testdata/cp-test.txt ha-314046-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2403804637/001/cp-test_ha-314046-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m02:/home/docker/cp-test.txt ha-314046:/home/docker/cp-test_ha-314046-m02_ha-314046.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046 "sudo cat /home/docker/cp-test_ha-314046-m02_ha-314046.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m02:/home/docker/cp-test.txt ha-314046-m03:/home/docker/cp-test_ha-314046-m02_ha-314046-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m03 "sudo cat /home/docker/cp-test_ha-314046-m02_ha-314046-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m02:/home/docker/cp-test.txt ha-314046-m04:/home/docker/cp-test_ha-314046-m02_ha-314046-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m04 "sudo cat /home/docker/cp-test_ha-314046-m02_ha-314046-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp testdata/cp-test.txt ha-314046-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2403804637/001/cp-test_ha-314046-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m03:/home/docker/cp-test.txt ha-314046:/home/docker/cp-test_ha-314046-m03_ha-314046.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046 "sudo cat /home/docker/cp-test_ha-314046-m03_ha-314046.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m03:/home/docker/cp-test.txt ha-314046-m02:/home/docker/cp-test_ha-314046-m03_ha-314046-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m02 "sudo cat /home/docker/cp-test_ha-314046-m03_ha-314046-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m03:/home/docker/cp-test.txt ha-314046-m04:/home/docker/cp-test_ha-314046-m03_ha-314046-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m04 "sudo cat /home/docker/cp-test_ha-314046-m03_ha-314046-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp testdata/cp-test.txt ha-314046-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2403804637/001/cp-test_ha-314046-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m04:/home/docker/cp-test.txt ha-314046:/home/docker/cp-test_ha-314046-m04_ha-314046.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046 "sudo cat /home/docker/cp-test_ha-314046-m04_ha-314046.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m04:/home/docker/cp-test.txt ha-314046-m02:/home/docker/cp-test_ha-314046-m04_ha-314046-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m02 "sudo cat /home/docker/cp-test_ha-314046-m04_ha-314046-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 cp ha-314046-m04:/home/docker/cp-test.txt ha-314046-m03:/home/docker/cp-test_ha-314046-m04_ha-314046-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 ssh -n ha-314046-m03 "sudo cat /home/docker/cp-test_ha-314046-m04_ha-314046-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.47s)
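Note: the copy matrix above repeats one pattern for every node pair: push a file with "cp", then read it back over "ssh". A condensed sketch of a single round trip, with "ha-demo" as a placeholder profile name:

	# copy from the host into the primary node, then from that node to m02, verifying each hop
	minikube -p ha-demo cp testdata/cp-test.txt ha-demo:/home/docker/cp-test.txt
	minikube -p ha-demo ssh -n ha-demo "sudo cat /home/docker/cp-test.txt"
	minikube -p ha-demo cp ha-demo:/home/docker/cp-test.txt ha-demo-m02:/home/docker/cp-test.txt
	minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test.txt"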

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (19.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 node stop m02 --alsologtostderr -v 5
E1219 02:38:23.663517    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 node stop m02 --alsologtostderr -v 5: (19.080877078s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5: exit status 7 (711.557402ms)

                                                
                                                
-- stdout --
	ha-314046
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-314046-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-314046-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-314046-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:38:38.308952   93685 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:38:38.309221   93685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:38:38.309231   93685 out.go:374] Setting ErrFile to fd 2...
	I1219 02:38:38.309236   93685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:38:38.309420   93685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:38:38.309586   93685 out.go:368] Setting JSON to false
	I1219 02:38:38.309608   93685 mustload.go:66] Loading cluster: ha-314046
	I1219 02:38:38.309741   93685 notify.go:221] Checking for updates...
	I1219 02:38:38.309977   93685 config.go:182] Loaded profile config "ha-314046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:38:38.309991   93685 status.go:174] checking status of ha-314046 ...
	I1219 02:38:38.310463   93685 cli_runner.go:164] Run: docker container inspect ha-314046 --format={{.State.Status}}
	I1219 02:38:38.329983   93685 status.go:371] ha-314046 host status = "Running" (err=<nil>)
	I1219 02:38:38.330014   93685 host.go:66] Checking if "ha-314046" exists ...
	I1219 02:38:38.330299   93685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-314046
	I1219 02:38:38.348640   93685 host.go:66] Checking if "ha-314046" exists ...
	I1219 02:38:38.348981   93685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:38:38.349027   93685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-314046
	I1219 02:38:38.367455   93685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/ha-314046/id_rsa Username:docker}
	I1219 02:38:38.467264   93685 ssh_runner.go:195] Run: systemctl --version
	I1219 02:38:38.473619   93685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:38:38.486676   93685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:38:38.542979   93685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-19 02:38:38.533010642 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:38:38.543512   93685 kubeconfig.go:125] found "ha-314046" server: "https://192.168.49.254:8443"
	I1219 02:38:38.543539   93685 api_server.go:166] Checking apiserver status ...
	I1219 02:38:38.543572   93685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:38:38.555548   93685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	W1219 02:38:38.564007   93685 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 02:38:38.564082   93685 ssh_runner.go:195] Run: ls
	I1219 02:38:38.567949   93685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1219 02:38:38.573680   93685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1219 02:38:38.573774   93685 status.go:463] ha-314046 apiserver status = Running (err=<nil>)
	I1219 02:38:38.573796   93685 status.go:176] ha-314046 status: &{Name:ha-314046 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:38:38.573812   93685 status.go:174] checking status of ha-314046-m02 ...
	I1219 02:38:38.574044   93685 cli_runner.go:164] Run: docker container inspect ha-314046-m02 --format={{.State.Status}}
	I1219 02:38:38.592305   93685 status.go:371] ha-314046-m02 host status = "Stopped" (err=<nil>)
	I1219 02:38:38.592351   93685 status.go:384] host is not running, skipping remaining checks
	I1219 02:38:38.592361   93685 status.go:176] ha-314046-m02 status: &{Name:ha-314046-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:38:38.592388   93685 status.go:174] checking status of ha-314046-m03 ...
	I1219 02:38:38.592669   93685 cli_runner.go:164] Run: docker container inspect ha-314046-m03 --format={{.State.Status}}
	I1219 02:38:38.610646   93685 status.go:371] ha-314046-m03 host status = "Running" (err=<nil>)
	I1219 02:38:38.610675   93685 host.go:66] Checking if "ha-314046-m03" exists ...
	I1219 02:38:38.611029   93685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-314046-m03
	I1219 02:38:38.630359   93685 host.go:66] Checking if "ha-314046-m03" exists ...
	I1219 02:38:38.630597   93685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:38:38.630629   93685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-314046-m03
	I1219 02:38:38.649304   93685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/ha-314046-m03/id_rsa Username:docker}
	I1219 02:38:38.749517   93685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:38:38.762657   93685 kubeconfig.go:125] found "ha-314046" server: "https://192.168.49.254:8443"
	I1219 02:38:38.762692   93685 api_server.go:166] Checking apiserver status ...
	I1219 02:38:38.762750   93685 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:38:38.774769   93685 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W1219 02:38:38.783062   93685 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 02:38:38.783115   93685 ssh_runner.go:195] Run: ls
	I1219 02:38:38.786983   93685 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1219 02:38:38.791052   93685 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1219 02:38:38.791074   93685 status.go:463] ha-314046-m03 apiserver status = Running (err=<nil>)
	I1219 02:38:38.791081   93685 status.go:176] ha-314046-m03 status: &{Name:ha-314046-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:38:38.791104   93685 status.go:174] checking status of ha-314046-m04 ...
	I1219 02:38:38.791342   93685 cli_runner.go:164] Run: docker container inspect ha-314046-m04 --format={{.State.Status}}
	I1219 02:38:38.809824   93685 status.go:371] ha-314046-m04 host status = "Running" (err=<nil>)
	I1219 02:38:38.809847   93685 host.go:66] Checking if "ha-314046-m04" exists ...
	I1219 02:38:38.810098   93685 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-314046-m04
	I1219 02:38:38.827931   93685 host.go:66] Checking if "ha-314046-m04" exists ...
	I1219 02:38:38.828175   93685 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:38:38.828209   93685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-314046-m04
	I1219 02:38:38.846354   93685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/ha-314046-m04/id_rsa Username:docker}
	I1219 02:38:38.945991   93685 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:38:38.959479   93685 status.go:176] ha-314046-m04 status: &{Name:ha-314046-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.79s)
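Note: the non-zero exit above is expected; "minikube status" exits non-zero (7 in this run) while a node in the profile is stopped, which is what makes the stopped m02 visible to the test. A hedged sketch of the same check, with "ha-demo" as a placeholder profile:

	# stop one control-plane node and observe the degraded status
	minikube -p ha-demo node stop m02
	minikube -p ha-demo status; echo "status exit code: $?"   # non-zero while m02 is stopped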

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (8.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 node start m02 --alsologtostderr -v 5: (7.751593824s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 stop --alsologtostderr -v 5
E1219 02:39:04.624679    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 stop --alsologtostderr -v 5: (45.009733306s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 start --wait true --alsologtostderr -v 5
E1219 02:40:02.345816    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:02.351495    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:02.361839    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:02.382862    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:02.423205    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:02.503565    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:02.663894    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:02.984576    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:03.625545    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:04.906092    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:07.467231    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:12.588254    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:22.828730    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:26.546122    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 start --wait true --alsologtostderr -v 5: (58.794652015s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (103.94s)
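Note: what this test asserts is that the node list survives a full stop/start cycle. A minimal sketch of that invariant check, assuming a placeholder profile "ha-demo" and that the node list output is otherwise stable across the restart:

	# capture the node list, restart the whole cluster, and compare the lists
	minikube -p ha-demo node list > /tmp/nodes.before
	minikube -p ha-demo stop
	minikube -p ha-demo start --wait true
	minikube -p ha-demo node list > /tmp/nodes.after
	diff /tmp/nodes.before /tmp/nodes.after   # no output expected if all nodes were kept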

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 node delete m03 --alsologtostderr -v 5: (9.791440305s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5
E1219 02:40:43.309561    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.67s)
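Note: the go-template query above is the readiness check; it prints one status per node's Ready condition, so counting the "True" lines after a "node delete" confirms the remaining members are healthy. Sketch, with "ha-demo" as a placeholder profile:

	# remove the third control-plane node, then count Ready nodes
	minikube -p ha-demo node delete m03
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}' | grep -c True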

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (42.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 stop --alsologtostderr -v 5
E1219 02:41:24.271099    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 stop --alsologtostderr -v 5: (42.215028948s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5: exit status 7 (116.771525ms)

                                                
                                                
-- stdout --
	ha-314046
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-314046-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-314046-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:41:26.896107  108002 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:41:26.896382  108002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:41:26.896395  108002 out.go:374] Setting ErrFile to fd 2...
	I1219 02:41:26.896400  108002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:41:26.896611  108002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:41:26.896816  108002 out.go:368] Setting JSON to false
	I1219 02:41:26.896848  108002 mustload.go:66] Loading cluster: ha-314046
	I1219 02:41:26.897022  108002 notify.go:221] Checking for updates...
	I1219 02:41:26.897492  108002 config.go:182] Loaded profile config "ha-314046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:41:26.897514  108002 status.go:174] checking status of ha-314046 ...
	I1219 02:41:26.898073  108002 cli_runner.go:164] Run: docker container inspect ha-314046 --format={{.State.Status}}
	I1219 02:41:26.917670  108002 status.go:371] ha-314046 host status = "Stopped" (err=<nil>)
	I1219 02:41:26.917740  108002 status.go:384] host is not running, skipping remaining checks
	I1219 02:41:26.917746  108002 status.go:176] ha-314046 status: &{Name:ha-314046 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:41:26.917782  108002 status.go:174] checking status of ha-314046-m02 ...
	I1219 02:41:26.918026  108002 cli_runner.go:164] Run: docker container inspect ha-314046-m02 --format={{.State.Status}}
	I1219 02:41:26.935424  108002 status.go:371] ha-314046-m02 host status = "Stopped" (err=<nil>)
	I1219 02:41:26.935459  108002 status.go:384] host is not running, skipping remaining checks
	I1219 02:41:26.935468  108002 status.go:176] ha-314046-m02 status: &{Name:ha-314046-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:41:26.935487  108002 status.go:174] checking status of ha-314046-m04 ...
	I1219 02:41:26.935747  108002 cli_runner.go:164] Run: docker container inspect ha-314046-m04 --format={{.State.Status}}
	I1219 02:41:26.952983  108002 status.go:371] ha-314046-m04 host status = "Stopped" (err=<nil>)
	I1219 02:41:26.953025  108002 status.go:384] host is not running, skipping remaining checks
	I1219 02:41:26.953035  108002 status.go:176] ha-314046-m04 status: &{Name:ha-314046-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (53.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1219 02:41:38.531632    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.482670207s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (44.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 node add --control-plane --alsologtostderr -v 5
E1219 02:42:42.699950    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:42:46.192454    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-314046 node add --control-plane --alsologtostderr -v 5: (43.78687164s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-314046 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.69s)
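Note: adding a control-plane member uses the same "node add" path with --control-plane; afterwards "status" should report the new node as "type: Control Plane". Sketch, placeholder profile as before:

	# join an extra control-plane node and confirm its role
	minikube -p ha-demo node add --control-plane
	minikube -p ha-demo status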

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
x
+
TestJSONOutput/start/Command (42.08s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-749966 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-749966 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (42.081043693s)
--- PASS: TestJSONOutput/start/Command (42.08s)
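Note: with --output=json every progress step is emitted as a CloudEvents-style JSON line (the io.k8s.sigs.minikube.step records visible in TestErrorJSONOutput further down). A hedged sketch for inspecting that stream, assuming jq is available (jq is not used by the report itself) and using a placeholder profile "json-demo":

	# start a profile in JSON mode and extract the step names from the event stream
	minikube start -p json-demo --output=json --user=testUser --memory=3072 --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'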

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.06s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-749966 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-749966 --output=json --user=testUser: (6.05936684s)
--- PASS: TestJSONOutput/stop/Command (6.06s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-273319 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-273319 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.892933ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"34c85b81-6d9a-443c-a1f6-3e6decfdba93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-273319] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c40ddaa4-4101-4191-aeee-8ce3a7a6660d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22230"}}
	{"specversion":"1.0","id":"3055d171-21c3-4daa-a977-dd5ed4324b74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6f635fed-800c-4f59-94de-e4b52f657098","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig"}}
	{"specversion":"1.0","id":"df8c296f-dd7c-4da0-b3d5-6ddf8ec8e74b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube"}}
	{"specversion":"1.0","id":"4ef69363-8b4e-42c2-8127-f69be7f680f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0aa2c3bd-1f54-46e1-ace7-1e249c84f7f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a1ec3864-48e1-4cfd-b98a-70f836a6aa7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-273319" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-273319
--- PASS: TestErrorJSONOutput (0.23s)
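Note: the failure case shows the other event type: an io.k8s.sigs.minikube.error record carrying exitcode, name (DRV_UNSUPPORTED_OS here) and message. A small hedged filter for pulling those fields out of a JSON-mode run, again assuming jq and a placeholder profile name; the run itself exits non-zero by design:

	# surface only error events from a JSON-mode run with an unsupported driver
	minikube start -p json-error-demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'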

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (26.71s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-870073 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-870073 --network=: (24.552494442s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-870073" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-870073
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-870073: (2.13571012s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.71s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (24.86s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-437658 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-437658 --network=bridge: (22.83715953s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-437658" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-437658
E1219 02:45:02.345401    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-437658: (1.999770047s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.86s)

                                                
                                    
x
+
TestKicExistingNetwork (25.31s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1219 02:45:03.413272    8536 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1219 02:45:03.430045    8536 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1219 02:45:03.430115    8536 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1219 02:45:03.430130    8536 cli_runner.go:164] Run: docker network inspect existing-network
W1219 02:45:03.447344    8536 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1219 02:45:03.447376    8536 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1219 02:45:03.447395    8536 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1219 02:45:03.447520    8536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1219 02:45:03.464608    8536 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d70e62b79a31 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:cf:22:72:cb:a0} reservation:<nil>}
I1219 02:45:03.464960    8536 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0054392f0}
I1219 02:45:03.464989    8536 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1219 02:45:03.465025    8536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1219 02:45:03.511098    8536 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-120417 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-120417 --network=existing-network: (23.166286388s)
helpers_test.go:176: Cleaning up "existing-network-120417" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-120417
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-120417: (2.015980762s)
I1219 02:45:28.710293    8536 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.31s)
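Note: the inspect failures above are just the test probing for the network before it exists and then creating it; the point of the test is that a pre-created bridge network is reused by --network= rather than recreated. A sketch of pre-creating one by hand, with the subnet taken from this run and "existing-net-demo" as a placeholder profile:

	# create a user-managed bridge network, then start a profile on it
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	minikube start -p existing-net-demo --network=existing-network
	docker network ls --format '{{.Name}}'   # existing-network should appear alongside the profile's resources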

                                                
                                    
x
+
TestKicCustomSubnet (24.52s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-890625 --subnet=192.168.60.0/24
E1219 02:45:30.032863    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-890625 --subnet=192.168.60.0/24: (22.349986189s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-890625 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-890625" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-890625
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-890625: (2.149860072s)
--- PASS: TestKicCustomSubnet (24.52s)
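Note: the subnet check is done directly against Docker: the profile's network is named after the profile, and "docker network inspect" with a Go template returns the configured CIDR. Sketch, with "subnet-demo" as a placeholder profile:

	# start on a specific subnet and confirm Docker allocated it
	minikube start -p subnet-demo --subnet=192.168.60.0/24
	docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24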

                                                
                                    
x
+
TestKicStaticIP (25.34s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-300035 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-300035 --static-ip=192.168.200.200: (23.035209415s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-300035 ip
helpers_test.go:176: Cleaning up "static-ip-300035" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-300035
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-300035: (2.160594973s)
--- PASS: TestKicStaticIP (25.34s)
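Note: --static-ip pins the node's address inside the profile's Docker network, and the test simply checks that "minikube ip" echoes the requested address back. Sketch, with "staticip-demo" as a placeholder profile:

	# request a fixed node IP and verify it
	minikube start -p staticip-demo --static-ip=192.168.200.200
	minikube -p staticip-demo ip   # expect 192.168.200.200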

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (50.69s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-307206 --driver=docker  --container-runtime=crio
E1219 02:46:38.531368    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-307206 --driver=docker  --container-runtime=crio: (20.774961111s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-310494 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-310494 --driver=docker  --container-runtime=crio: (23.851237697s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-307206
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-310494
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-310494" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-310494
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-310494: (2.380737541s)
helpers_test.go:176: Cleaning up "first-307206" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-307206
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-307206: (2.441631024s)
--- PASS: TestMinikubeProfile (50.69s)
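The profile round-trip above amounts to creating two profiles, switching the active one, and listing them; a hand-run sketch with illustrative names:

    minikube start -p first-demo --driver=docker --container-runtime=crio
    minikube start -p second-demo --driver=docker --container-runtime=crio
    minikube profile first-demo      # make first-demo the active profile
    minikube profile list -ojson     # both profiles should appear and report OK
    minikube delete -p second-demo && minikube delete -p first-demo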

TestMountStart/serial/StartWithMountFirst (7.75s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-679036 --memory=3072 --mount-string /tmp/TestMountStartserial3895091795/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-679036 --memory=3072 --mount-string /tmp/TestMountStartserial3895091795/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.752113644s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.75s)
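The mount flags used here can also be tried manually; a minimal sketch, assuming an arbitrary host directory and the port the test picked:

    minikube start -p mount-demo --memory=3072 --no-kubernetes \
        --mount-string /tmp/mount-demo:/minikube-host \
        --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543 \
        --driver=docker --container-runtime=crio
    minikube -p mount-demo ssh -- ls /minikube-host    # host directory should be visible in the node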

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-679036 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.88s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-695725 --memory=3072 --mount-string /tmp/TestMountStartserial3895091795/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-695725 --memory=3072 --mount-string /tmp/TestMountStartserial3895091795/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.879476578s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.88s)

TestMountStart/serial/VerifyMountSecond (0.29s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-695725 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-679036 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-679036 --alsologtostderr -v=5: (1.67425336s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-695725 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.25s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-695725
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-695725: (1.248045515s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.32s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-695725
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-695725: (6.322416864s)
--- PASS: TestMountStart/serial/RestartStopped (7.32s)

TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-695725 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (67.93s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-285296 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1219 02:47:42.700082    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:48:01.581986    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-285296 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m7.442046322s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.93s)
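For reference, a two-node cluster like the one above can be brought up directly; a sketch with an illustrative profile name:

    minikube start -p multinode-demo --nodes=2 --memory=3072 --wait=true --driver=docker --container-runtime=crio
    minikube -p multinode-demo status    # control plane and worker should both report Running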

TestMultiNode/serial/DeployApp2Nodes (4.02s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-285296 -- rollout status deployment/busybox: (2.584116059s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-55rsd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-xbrm2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-55rsd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-xbrm2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-55rsd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-xbrm2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.02s)
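The DNS assertions above reduce to resolving external and in-cluster names from each replica; a hand-run sketch against an existing busybox deployment (pod name is a placeholder):

    kubectl --context multinode-demo rollout status deployment/busybox
    kubectl --context multinode-demo get pods -o jsonpath='{.items[*].status.podIP}'
    kubectl --context multinode-demo exec <busybox-pod> -- nslookup kubernetes.io
    kubectl --context multinode-demo exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local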

TestMultiNode/serial/PingHostFrom2Pods (0.73s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-55rsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-55rsd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-xbrm2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-285296 -- exec busybox-7b57f96db7-xbrm2 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

TestMultiNode/serial/AddNode (27.11s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-285296 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-285296 -v=5 --alsologtostderr: (26.454135617s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.11s)
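Adding a worker after the fact is a single command; a sketch against the illustrative profile used above:

    minikube node add -p multinode-demo
    minikube -p multinode-demo status    # a third node (m03) should now be listed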

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-285296 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.68s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.07s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp testdata/cp-test.txt multinode-285296:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile230805183/001/cp-test_multinode-285296.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296:/home/docker/cp-test.txt multinode-285296-m02:/home/docker/cp-test_multinode-285296_multinode-285296-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m02 "sudo cat /home/docker/cp-test_multinode-285296_multinode-285296-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296:/home/docker/cp-test.txt multinode-285296-m03:/home/docker/cp-test_multinode-285296_multinode-285296-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m03 "sudo cat /home/docker/cp-test_multinode-285296_multinode-285296-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp testdata/cp-test.txt multinode-285296-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile230805183/001/cp-test_multinode-285296-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296-m02:/home/docker/cp-test.txt multinode-285296:/home/docker/cp-test_multinode-285296-m02_multinode-285296.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296 "sudo cat /home/docker/cp-test_multinode-285296-m02_multinode-285296.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296-m02:/home/docker/cp-test.txt multinode-285296-m03:/home/docker/cp-test_multinode-285296-m02_multinode-285296-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m03 "sudo cat /home/docker/cp-test_multinode-285296-m02_multinode-285296-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp testdata/cp-test.txt multinode-285296-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile230805183/001/cp-test_multinode-285296-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296-m03:/home/docker/cp-test.txt multinode-285296:/home/docker/cp-test_multinode-285296-m03_multinode-285296.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296 "sudo cat /home/docker/cp-test_multinode-285296-m03_multinode-285296.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 cp multinode-285296-m03:/home/docker/cp-test.txt multinode-285296-m02:/home/docker/cp-test_multinode-285296-m03_multinode-285296-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 ssh -n multinode-285296-m02 "sudo cat /home/docker/cp-test_multinode-285296-m03_multinode-285296-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.07s)
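The copy matrix above is driven entirely by minikube cp plus minikube ssh; a condensed sketch (paths as used by the test, profile name illustrative):

    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt        # host -> node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt            # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt   # node -> node
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"          # verify contents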

TestMultiNode/serial/StopNode (2.26s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-285296 node stop m03: (1.262952812s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-285296 status: exit status 7 (500.500837ms)
-- stdout --
	multinode-285296
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-285296-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-285296-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-285296 status --alsologtostderr: exit status 7 (492.203579ms)
-- stdout --
	multinode-285296
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-285296-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-285296-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1219 02:49:30.621550  168243 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:49:30.621653  168243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:30.621661  168243 out.go:374] Setting ErrFile to fd 2...
	I1219 02:49:30.621665  168243 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:49:30.621860  168243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:49:30.622060  168243 out.go:368] Setting JSON to false
	I1219 02:49:30.622085  168243 mustload.go:66] Loading cluster: multinode-285296
	I1219 02:49:30.622147  168243 notify.go:221] Checking for updates...
	I1219 02:49:30.622437  168243 config.go:182] Loaded profile config "multinode-285296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:49:30.622449  168243 status.go:174] checking status of multinode-285296 ...
	I1219 02:49:30.622977  168243 cli_runner.go:164] Run: docker container inspect multinode-285296 --format={{.State.Status}}
	I1219 02:49:30.641934  168243 status.go:371] multinode-285296 host status = "Running" (err=<nil>)
	I1219 02:49:30.641977  168243 host.go:66] Checking if "multinode-285296" exists ...
	I1219 02:49:30.642235  168243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-285296
	I1219 02:49:30.660117  168243 host.go:66] Checking if "multinode-285296" exists ...
	I1219 02:49:30.660455  168243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:49:30.660503  168243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-285296
	I1219 02:49:30.678027  168243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/multinode-285296/id_rsa Username:docker}
	I1219 02:49:30.776026  168243 ssh_runner.go:195] Run: systemctl --version
	I1219 02:49:30.782471  168243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:49:30.794283  168243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:49:30.848574  168243 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-19 02:49:30.839328004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:49:30.849200  168243 kubeconfig.go:125] found "multinode-285296" server: "https://192.168.67.2:8443"
	I1219 02:49:30.849237  168243 api_server.go:166] Checking apiserver status ...
	I1219 02:49:30.849284  168243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:49:30.860498  168243 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup
	W1219 02:49:30.868659  168243 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1256/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 02:49:30.868734  168243 ssh_runner.go:195] Run: ls
	I1219 02:49:30.872282  168243 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1219 02:49:30.876369  168243 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1219 02:49:30.876389  168243 status.go:463] multinode-285296 apiserver status = Running (err=<nil>)
	I1219 02:49:30.876397  168243 status.go:176] multinode-285296 status: &{Name:multinode-285296 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:49:30.876410  168243 status.go:174] checking status of multinode-285296-m02 ...
	I1219 02:49:30.876658  168243 cli_runner.go:164] Run: docker container inspect multinode-285296-m02 --format={{.State.Status}}
	I1219 02:49:30.893993  168243 status.go:371] multinode-285296-m02 host status = "Running" (err=<nil>)
	I1219 02:49:30.894013  168243 host.go:66] Checking if "multinode-285296-m02" exists ...
	I1219 02:49:30.894242  168243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-285296-m02
	I1219 02:49:30.911185  168243 host.go:66] Checking if "multinode-285296-m02" exists ...
	I1219 02:49:30.911421  168243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:49:30.911454  168243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-285296-m02
	I1219 02:49:30.928530  168243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/22230-4987/.minikube/machines/multinode-285296-m02/id_rsa Username:docker}
	I1219 02:49:31.026910  168243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:49:31.038989  168243 status.go:176] multinode-285296-m02 status: &{Name:multinode-285296-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:49:31.039029  168243 status.go:174] checking status of multinode-285296-m03 ...
	I1219 02:49:31.039302  168243 cli_runner.go:164] Run: docker container inspect multinode-285296-m03 --format={{.State.Status}}
	I1219 02:49:31.056931  168243 status.go:371] multinode-285296-m03 host status = "Stopped" (err=<nil>)
	I1219 02:49:31.056955  168243 status.go:384] host is not running, skipping remaining checks
	I1219 02:49:31.056962  168243 status.go:176] multinode-285296-m03 status: &{Name:multinode-285296-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
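Stopping a single node, and the exit code the test expects, can be reproduced directly; minikube status deliberately returns exit code 7 here because one host is Stopped:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status; echo "exit=$?"    # m03 shows host/kubelet Stopped, exit=7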

TestMultiNode/serial/StartAfterStop (7.22s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-285296 node start m03 -v=5 --alsologtostderr: (6.511602823s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.22s)

TestMultiNode/serial/RestartKeepsNodes (79.78s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-285296
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-285296
E1219 02:50:02.348424    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-285296: (29.58315162s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-285296 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-285296 --wait=true -v=5 --alsologtostderr: (50.066199419s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-285296
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.78s)
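The invariant being checked is simply that a full stop/start cycle keeps the node list intact; a sketch:

    minikube node list -p multinode-demo
    minikube stop -p multinode-demo
    minikube start -p multinode-demo --wait=true
    minikube node list -p multinode-demo    # same set of nodes as before the restart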

TestMultiNode/serial/DeleteNode (5.27s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-285296 node delete m03: (4.661973288s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)

TestMultiNode/serial/StopMultiNode (30.36s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-285296 stop: (30.163386369s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-285296 status: exit status 7 (94.437835ms)
-- stdout --
	multinode-285296
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-285296-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-285296 status --alsologtostderr: exit status 7 (97.512904ms)
-- stdout --
	multinode-285296
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-285296-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1219 02:51:33.646321  178133 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:51:33.646585  178133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:51:33.646596  178133 out.go:374] Setting ErrFile to fd 2...
	I1219 02:51:33.646600  178133 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:51:33.646825  178133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:51:33.646993  178133 out.go:368] Setting JSON to false
	I1219 02:51:33.647015  178133 mustload.go:66] Loading cluster: multinode-285296
	I1219 02:51:33.647098  178133 notify.go:221] Checking for updates...
	I1219 02:51:33.647370  178133 config.go:182] Loaded profile config "multinode-285296": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:51:33.647386  178133 status.go:174] checking status of multinode-285296 ...
	I1219 02:51:33.647834  178133 cli_runner.go:164] Run: docker container inspect multinode-285296 --format={{.State.Status}}
	I1219 02:51:33.666174  178133 status.go:371] multinode-285296 host status = "Stopped" (err=<nil>)
	I1219 02:51:33.666193  178133 status.go:384] host is not running, skipping remaining checks
	I1219 02:51:33.666200  178133 status.go:176] multinode-285296 status: &{Name:multinode-285296 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:51:33.666231  178133 status.go:174] checking status of multinode-285296-m02 ...
	I1219 02:51:33.666464  178133 cli_runner.go:164] Run: docker container inspect multinode-285296-m02 --format={{.State.Status}}
	I1219 02:51:33.683858  178133 status.go:371] multinode-285296-m02 host status = "Stopped" (err=<nil>)
	I1219 02:51:33.683898  178133 status.go:384] host is not running, skipping remaining checks
	I1219 02:51:33.683907  178133 status.go:176] multinode-285296-m02 status: &{Name:multinode-285296-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.36s)

TestMultiNode/serial/RestartMultiNode (50.82s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-285296 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1219 02:51:38.530878    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-285296 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.215064215s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-285296 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.82s)

TestMultiNode/serial/ValidateNameConflict (22.63s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-285296
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-285296-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-285296-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.191172ms)
-- stdout --
	* [multinode-285296-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-285296-m02' is duplicated with machine name 'multinode-285296-m02' in profile 'multinode-285296'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-285296-m03 --driver=docker  --container-runtime=crio
E1219 02:52:42.700277    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-285296-m03 --driver=docker  --container-runtime=crio: (19.829041965s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-285296
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-285296: exit status 80 (303.886019ms)
-- stdout --
	* Adding node m03 to cluster multinode-285296 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-285296-m03 already exists in multinode-285296-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-285296-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-285296-m03: (2.36763218s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.63s)

TestPreload (104.36s)
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-994180 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-994180 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (48.146736935s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-994180 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-994180 image pull gcr.io/k8s-minikube/busybox: (1.436684807s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-994180
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-994180: (8.020164536s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-994180 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1219 02:54:05.747525    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-994180 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (44.092205966s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-994180 image list
helpers_test.go:176: Cleaning up "test-preload-994180" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-994180
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-994180: (2.427188509s)
--- PASS: TestPreload (104.36s)
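The preload round-trip above amounts to: create a cluster without the preloaded tarball, pull an extra image, stop, restart with preload enabled, and confirm the image survived. A sketch with an illustrative profile name:

    minikube start -p preload-demo --memory=3072 --preload=false --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --preload=true --wait=true
    minikube -p preload-demo image list    # gcr.io/k8s-minikube/busybox should still be listed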

TestScheduledStopUnix (98.57s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-759961 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-759961 --memory=3072 --driver=docker  --container-runtime=crio: (21.83274154s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-759961 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1219 02:54:57.614900  195269 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:54:57.615001  195269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:54:57.615012  195269 out.go:374] Setting ErrFile to fd 2...
	I1219 02:54:57.615018  195269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:54:57.615266  195269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:54:57.615540  195269 out.go:368] Setting JSON to false
	I1219 02:54:57.615652  195269 mustload.go:66] Loading cluster: scheduled-stop-759961
	I1219 02:54:57.616004  195269 config.go:182] Loaded profile config "scheduled-stop-759961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:54:57.616092  195269 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/config.json ...
	I1219 02:54:57.616289  195269 mustload.go:66] Loading cluster: scheduled-stop-759961
	I1219 02:54:57.616416  195269 config.go:182] Loaded profile config "scheduled-stop-759961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-759961 -n scheduled-stop-759961
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1219 02:54:58.011827  195434 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:54:58.012071  195434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:54:58.012079  195434 out.go:374] Setting ErrFile to fd 2...
	I1219 02:54:58.012084  195434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:54:58.012257  195434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:54:58.012519  195434 out.go:368] Setting JSON to false
	I1219 02:54:58.012757  195434 daemonize_unix.go:73] killing process 195303 as it is an old scheduled stop
	I1219 02:54:58.012874  195434 mustload.go:66] Loading cluster: scheduled-stop-759961
	I1219 02:54:58.013358  195434 config.go:182] Loaded profile config "scheduled-stop-759961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:54:58.013431  195434 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/config.json ...
	I1219 02:54:58.013603  195434 mustload.go:66] Loading cluster: scheduled-stop-759961
	I1219 02:54:58.013717  195434 config.go:182] Loaded profile config "scheduled-stop-759961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1219 02:54:58.017688    8536 retry.go:31] will retry after 72.17µs: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.018847    8536 retry.go:31] will retry after 110.238µs: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.020014    8536 retry.go:31] will retry after 201.572µs: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.021139    8536 retry.go:31] will retry after 338.045µs: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.022277    8536 retry.go:31] will retry after 375.448µs: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.023398    8536 retry.go:31] will retry after 727.319µs: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.024523    8536 retry.go:31] will retry after 749.433µs: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.025655    8536 retry.go:31] will retry after 1.886104ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.027822    8536 retry.go:31] will retry after 3.345959ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.032027    8536 retry.go:31] will retry after 3.641082ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.036295    8536 retry.go:31] will retry after 6.72775ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.043535    8536 retry.go:31] will retry after 7.415679ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.051787    8536 retry.go:31] will retry after 14.750123ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.067104    8536 retry.go:31] will retry after 16.615081ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.084395    8536 retry.go:31] will retry after 29.718155ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
I1219 02:54:58.114639    8536 retry.go:31] will retry after 45.6562ms: open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-759961 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
E1219 02:55:02.344854    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-759961 -n scheduled-stop-759961
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-759961
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-759961 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1219 02:55:23.924084  196126 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:55:23.924404  196126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:55:23.924415  196126 out.go:374] Setting ErrFile to fd 2...
	I1219 02:55:23.924420  196126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:55:23.924616  196126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:55:23.924901  196126 out.go:368] Setting JSON to false
	I1219 02:55:23.924989  196126 mustload.go:66] Loading cluster: scheduled-stop-759961
	I1219 02:55:23.925406  196126 config.go:182] Loaded profile config "scheduled-stop-759961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:55:23.925475  196126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/scheduled-stop-759961/config.json ...
	I1219 02:55:23.925695  196126 mustload.go:66] Loading cluster: scheduled-stop-759961
	I1219 02:55:23.925808  196126 config.go:182] Loaded profile config "scheduled-stop-759961": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-759961
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-759961: exit status 7 (80.952667ms)
-- stdout --
	scheduled-stop-759961
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-759961 -n scheduled-stop-759961
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-759961 -n scheduled-stop-759961: exit status 7 (77.529286ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-759961" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-759961
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-759961: (5.203852795s)
--- PASS: TestScheduledStopUnix (98.57s)
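Scheduled stop itself is driven by the three flags seen above; a minimal sketch with an illustrative profile name:

    minikube stop -p sched-demo --schedule 5m                    # arm a stop five minutes out
    minikube status -p sched-demo --format='{{.TimeToStop}}'     # shows the pending countdown
    minikube stop -p sched-demo --cancel-scheduled               # cancel all scheduled stops
    minikube stop -p sched-demo --schedule 15s                   # re-arm; ~15s later, status exits 7 with everything Stopped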

TestInsufficientStorage (9.01s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-486590 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-486590 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.480491167s)
-- stdout --
	{"specversion":"1.0","id":"cfcf41ba-6a0b-46ce-b54c-c0a279530e89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-486590] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92cb15fc-7374-4a2f-895d-5d28fd0e7440","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22230"}}
	{"specversion":"1.0","id":"0d3d5ed7-1416-4112-8eed-e8b75afc1604","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"677c9869-6c96-4de1-8836-7d3527b94e99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig"}}
	{"specversion":"1.0","id":"88bc5bb2-8156-49b4-8f71-cb0b89561394","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube"}}
	{"specversion":"1.0","id":"186d5d84-abdc-4ac1-9b77-56a032582fc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3cf9f808-4da3-49ae-bed2-8b8828cdd17a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0b5abecf-2aac-4502-9676-8b16ad6a74d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f2f9c7cb-e311-400a-9fea-6f648cec902d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a3778492-8a05-42f0-a95a-80e3cd245d36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9387d64b-38ed-4337-bc49-64eab4a627eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b128171e-3bc1-451a-af90-5a97777d80a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-486590\" primary control-plane node in \"insufficient-storage-486590\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"71479bb8-4503-4bd3-ab77-1b7df063a23b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765966054-22186 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbb76a76-8004-4c78-89b0-e93ac4b52b63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c46c722-8c1f-4645-9cc8-b0fdf8428683","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-486590 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-486590 --output=json --layout=cluster: exit status 7 (299.536949ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-486590","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-486590","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1219 02:56:21.058744  198659 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-486590" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-486590 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-486590 --output=json --layout=cluster: exit status 7 (297.606765ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-486590","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-486590","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1219 02:56:21.356566  198771 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-486590" does not appear in /home/jenkins/minikube-integration/22230-4987/kubeconfig
	E1219 02:56:21.367074  198771 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/insufficient-storage-486590/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-486590" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-486590
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-486590: (1.927873026s)
--- PASS: TestInsufficientStorage (9.01s)
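Note: minikube start --output=json emits one CloudEvents-style JSON object per line, as captured in the stdout block above. The following Go sketch is not part of the test suite; the struct only covers the fields visible in this report, and the stream source (stdin) is an assumption for illustration.

// Minimal sketch: decode the per-line JSON events that
// "minikube start --output=json" prints, e.g. piped in on stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data        map[string]string `json:"data"` // message, name, currentstep, totalsteps, exitcode, advice, ...
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. RSRC_DOCKER_STORAGE with exitcode 26 in the run above
			fmt.Printf("error %s: %s\n", ev.Data["name"], ev.Data["message"])
			continue
		}
		fmt.Printf("[%s/%s] %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
	}
}

Piping the start command's stdout into a program like this would print each step and surface the RSRC_DOCKER_STORAGE error shown above.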

                                                
                                    
TestRunningBinaryUpgrade (68.94s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.755463456 start -p running-upgrade-936726 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1219 02:57:42.700402    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.755463456 start -p running-upgrade-936726 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.58799731s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-936726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-936726 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.310627094s)
helpers_test.go:176: Cleaning up "running-upgrade-936726" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-936726
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-936726: (4.488271386s)
--- PASS: TestRunningBinaryUpgrade (68.94s)

                                                
                                    
TestKubernetesUpgrade (304.57s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235536 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-235536 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.194761144s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-235536
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-235536: (1.989999247s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-235536 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-235536 status --format={{.Host}}: exit status 7 (92.576454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235536 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-235536 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.862598993s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-235536 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235536 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-235536 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (105.091967ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-235536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-235536
	    minikube start -p kubernetes-upgrade-235536 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2355362 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-235536 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235536 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-235536 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.496843895s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-235536" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-235536
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-235536: (2.75209629s)
--- PASS: TestKubernetesUpgrade (304.57s)
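For reference, the sequence exercised above can be reproduced by hand: start on v1.28.0, stop, confirm status exits 7, upgrade to v1.35.0-rc.1, and verify that an in-place downgrade is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A rough Go sketch, not the code of version_upgrade_test.go; the profile name is a placeholder and the binary path is assumed to be the one used in this report.

package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary and returns its exit code.
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	p := "kubernetes-upgrade-demo" // hypothetical profile name
	run("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio")
	run("stop", "-p", p)
	fmt.Println("status exit:", run("-p", p, "status", "--format={{.Host}}")) // 7 while stopped, per the log
	run("start", "-p", p, "--memory=3072", "--kubernetes-version=v1.35.0-rc.1", "--driver=docker", "--container-runtime=crio")
	// Downgrading in place is rejected; the log above shows exit status 106.
	fmt.Println("downgrade exit:", run("start", "-p", p, "--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=crio"))
	run("delete", "-p", p)
}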

                                                
                                    
TestMissingContainerUpgrade (64.28s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.4232950863 start -p missing-upgrade-196876 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.4232950863 start -p missing-upgrade-196876 --memory=3072 --driver=docker  --container-runtime=crio: (23.533504752s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-196876
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-196876: (1.765694651s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-196876
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-196876 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-196876 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.305761342s)
helpers_test.go:176: Cleaning up "missing-upgrade-196876" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-196876
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-196876: (2.529134028s)
--- PASS: TestMissingContainerUpgrade (64.28s)

                                                
                                    
TestPause/serial/Start (47.66s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-211152 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1219 02:56:25.393244    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:56:38.531188    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-211152 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (47.663173734s)
--- PASS: TestPause/serial/Start (47.66s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.03s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-211152 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-211152 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.020759202s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.03s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148997 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-148997 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (76.131572ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-148997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (25.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148997 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-148997 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.323574653s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-148997 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.74s)

                                                
                                    
TestNetworkPlugins/group/false (5.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-821749 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-821749 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (198.21487ms)

                                                
                                                
-- stdout --
	* [false-821749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:57:30.203637  219278 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:57:30.203801  219278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:57:30.203813  219278 out.go:374] Setting ErrFile to fd 2...
	I1219 02:57:30.203820  219278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:57:30.204152  219278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-4987/.minikube/bin
	I1219 02:57:30.204764  219278 out.go:368] Setting JSON to false
	I1219 02:57:30.205931  219278 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2401,"bootTime":1766110649,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:57:30.205993  219278 start.go:143] virtualization: kvm guest
	I1219 02:57:30.207604  219278 out.go:179] * [false-821749] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:57:30.209089  219278 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:57:30.209087  219278 notify.go:221] Checking for updates...
	I1219 02:57:30.211504  219278 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:57:30.212565  219278 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-4987/kubeconfig
	I1219 02:57:30.213669  219278 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-4987/.minikube
	I1219 02:57:30.215169  219278 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:57:30.216466  219278 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:57:30.218488  219278 config.go:182] Loaded profile config "NoKubernetes-148997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:30.218658  219278 config.go:182] Loaded profile config "cert-expiration-254196": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:30.218812  219278 config.go:182] Loaded profile config "cert-options-351999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1219 02:57:30.219535  219278 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:57:30.249112  219278 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1219 02:57:30.249296  219278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1219 02:57:30.316332  219278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:false NGoroutines:71 SystemTime:2025-12-19 02:57:30.30550612 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1219 02:57:30.316436  219278 docker.go:319] overlay module found
	I1219 02:57:30.318239  219278 out.go:179] * Using the docker driver based on user configuration
	I1219 02:57:30.320136  219278 start.go:309] selected driver: docker
	I1219 02:57:30.320152  219278 start.go:928] validating driver "docker" against <nil>
	I1219 02:57:30.320164  219278 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:57:30.322162  219278 out.go:203] 
	W1219 02:57:30.323804  219278 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1219 02:57:30.324721  219278 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-821749 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-821749" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-254196
contexts:
- context:
    cluster: cert-expiration-254196
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-254196
  name: cert-expiration-254196
current-context: ""
kind: Config
users:
- name: cert-expiration-254196
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/cert-expiration-254196/client.crt
    client-key: /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/cert-expiration-254196/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-821749

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821749"

                                                
                                                
----------------------- debugLogs end: false-821749 [took: 4.838550802s] --------------------------------
helpers_test.go:176: Cleaning up "false-821749" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-821749
--- PASS: TestNetworkPlugins/group/false (5.26s)
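The substantive check in this group is the up-front validation: with the crio runtime, --cni=false is rejected with exit status 14 (MK_USAGE, "The \"crio\" container runtime requires CNI") before any node is created. A hedged illustration of that check follows; the profile name is made up and this is not part of net_test.go.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	var stderr bytes.Buffer
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-p", "false-demo", "--memory=3072",
		"--cni=false", "--driver=docker", "--container-runtime=crio")
	cmd.Stderr = &stderr
	err := cmd.Run()
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", ee.ExitCode()) // 14 in the run above
	}
	if strings.Contains(stderr.String(), "requires CNI") {
		fmt.Println("crio refused to start without a CNI, as expected")
	}
}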

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148997 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-148997 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.028732604s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-148997 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-148997 status -o json: exit status 2 (357.512348ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-148997","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-148997
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-148997: (3.473762603s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.86s)
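The status -o json output above is the flat per-profile shape (distinct from the --layout=cluster form earlier in this report): a running machine with Kubernetes disabled reports Host=Running with Kubelet and APIServer stopped, and the command exits with status 2. A small decoding sketch, assuming only the fields shown here and not taken from no_kubernetes_test.go:

package main

import (
	"encoding/json"
	"fmt"
)

type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Verbatim from the log above.
	raw := []byte(`{"Name":"NoKubernetes-148997","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var st profileStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	kubernetesOff := st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped"
	fmt.Printf("%s: host running with Kubernetes off: %v\n", st.Name, kubernetesOff)
}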

                                                
                                    
TestNoKubernetes/serial/Start (9.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148997 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-148997 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.438221706s)
--- PASS: TestNoKubernetes/serial/Start (9.44s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22230-4987/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-148997 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-148997 "sudo systemctl is-active --quiet service kubelet": exit status 1 (324.879639ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (2.756998747s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.73s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-148997
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-148997: (1.330541894s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148997 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-148997 --driver=docker  --container-runtime=crio: (6.901220143s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.90s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-148997 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-148997 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.65767ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (285.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2124824171 start -p stopped-upgrade-878125 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2124824171 start -p stopped-upgrade-878125 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.97802911s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2124824171 -p stopped-upgrade-878125 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2124824171 -p stopped-upgrade-878125 stop: (2.943427679s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-878125 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-878125 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m20.24309162s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (285.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.443544631s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.44s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-821749 "pgrep -a kubelet"
I1219 02:59:48.594107    8536 config.go:182] Loaded profile config "auto-821749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-821749 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8xdgp" [13a98b11-d902-4951-a1fc-35d5e967a8f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8xdgp" [13a98b11-d902-4951-a1fc-35d5e967a8f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003590092s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-821749 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1219 03:00:02.347875    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.389924924s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (43.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.52994189s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.53s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-lwgkl" [da2a1747-1a22-4f36-aa70-8a46d80fc588] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005529008s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-821749 "pgrep -a kubelet"
I1219 03:00:58.935697    8536 config.go:182] Loaded profile config "flannel-821749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-821749 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-r5j8g" [90ee75db-55f1-46fd-ae85-328d078ca2e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-r5j8g" [90ee75db-55f1-46fd-ae85-328d078ca2e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003835228s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-bjhtp" [0413b453-4fb0-4770-bddb-76e6cfd7abc5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003698478s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-821749 "pgrep -a kubelet"
I1219 03:01:06.783261    8536 config.go:182] Loaded profile config "kindnet-821749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-821749 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-4h86k" [df461719-9b43-4428-8838-727fd3b2003e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-4h86k" [df461719-9b43-4428-8838-727fd3b2003e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003517478s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-821749 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-821749 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (69.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m9.869832422s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.87s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (71.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1219 03:01:38.530847    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/addons-791857/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m11.349565572s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-821749 "pgrep -a kubelet"
I1219 03:02:38.092454    8536 config.go:182] Loaded profile config "enable-default-cni-821749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-821749 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pglxt" [76f50be3-3bfe-42d5-bfb9-489f5d572e46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-pglxt" [76f50be3-3bfe-42d5-bfb9-489f5d572e46] Running
E1219 03:02:42.699603    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-736733/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004267846s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-821749 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-821749 "pgrep -a kubelet"
I1219 03:02:48.316689    8536 config.go:182] Loaded profile config "bridge-821749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-821749 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-p2hnh" [82ddb593-dd84-4f5e-ba2f-84f6dd655168] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-p2hnh" [82ddb593-dd84-4f5e-ba2f-84f6dd655168] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003415842s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-821749 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (53.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.537857939s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.54s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (50.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-821749 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.652972579s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.65s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-878125
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-878125: (1.364967614s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (55.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.070460723s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (55.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (52.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (52.993155006s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.99s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-kxclm" [4b5313ae-5690-4c15-8785-a6d6abdf5f21] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-kxclm" [4b5313ae-5690-4c15-8785-a6d6abdf5f21] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00432869s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-821749 "pgrep -a kubelet"
I1219 03:04:05.643127    8536 config.go:182] Loaded profile config "calico-821749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-821749 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8gxzb" [d0897cc1-b6c0-43d1-87fd-320a80bdd3aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-8gxzb" [d0897cc1-b6c0-43d1-87fd-320a80bdd3aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005868776s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-821749 "pgrep -a kubelet"
I1219 03:04:09.007568    8536 config.go:182] Loaded profile config "custom-flannel-821749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-821749 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-sr5kx" [cedc13fc-97b5-4fd4-93df-f83060abc1a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-sr5kx" [cedc13fc-97b5-4fd4-93df-f83060abc1a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004258209s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-821749 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-821749 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-821749 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
E1219 03:24:09.207751    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/custom-flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:11.530601    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/bridge-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-433330 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1b41a78a-e73b-4f8e-8857-c9e0e83de64f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1b41a78a-e73b-4f8e-8857-c9e0e83de64f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003879792s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-433330 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-278042 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [63c824bf-6272-44c8-8874-48b3d0245b2f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [63c824bf-6272-44c8-8874-48b3d0245b2f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004139034s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-278042 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (40.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (40.712081439s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (42.497171334s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-433330 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-433330 --alsologtostderr -v=3: (16.107854268s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-278042 --alsologtostderr -v=3
E1219 03:04:48.777856    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:48.783152    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:48.793460    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:48.813772    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:48.854761    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:48.935125    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:49.095970    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:49.416376    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:50.056742    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:51.337252    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:53.901838    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:04:59.022931    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-278042 --alsologtostderr -v=3: (16.340387598s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330: exit status 7 (91.543925ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-433330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (46.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-433330 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.917905427s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433330 -n old-k8s-version-433330
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042: exit status 7 (94.134293ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-278042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (49.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1219 03:05:02.345203    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/functional-382801/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:09.263883    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-278042 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (48.82212503s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-278042 -n no-preload-278042
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-805185 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [772c026a-4fb2-41ec-a206-d9daf7200d65] Pending
helpers_test.go:353: "busybox" [772c026a-4fb2-41ec-a206-d9daf7200d65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [772c026a-4fb2-41ec-a206-d9daf7200d65] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00443087s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-805185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a9f35053-e166-41af-99cf-2a293efdd88e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a9f35053-e166-41af-99cf-2a293efdd88e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003768979s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-717222 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (16.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-805185 --alsologtostderr -v=3
E1219 03:05:29.744974    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-805185 --alsologtostderr -v=3: (16.563770441s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-717222 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-717222 --alsologtostderr -v=3: (16.796844502s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185: exit status 7 (79.945087ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-805185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (46.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-805185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (46.312512107s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-805185 -n embed-certs-805185
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.66s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
E1219 03:05:52.714011    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:52.794923    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222: exit status 7 (127.181871ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-717222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1219 03:05:52.955177    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
E1219 03:05:53.276255    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:53.917252    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:55.198363    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:05:57.758543    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:00.492295    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:00.497822    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:00.508263    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:00.529141    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:00.569693    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:00.650236    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:00.810755    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:01.131643    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:01.771851    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:02.879223    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:03.054112    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:05.614503    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:10.705940    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/auto-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:10.735293    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:13.119919    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/flannel-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:06:20.975552    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/kindnet-821749/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-717222 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (45.863814562s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-717222 -n default-k8s-diff-port-717222
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.21s)
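Note on the cert_rotation errors interleaved above: they are not emitted by this test. They appear to come from client-go's transport cache inside the long-running test process (pid 8536), which is still watching client certificates for profiles (flannel-821749, kindnet-821749, auto-821749) whose files have already been deleted. A minimal sketch of how such stale kubeconfig entries could be pruned, using flannel-821749 as the example profile; these are standard kubectl/minikube commands and not part of the captured run:

	# Remove kubeconfig entries that still point at a deleted minikube profile.
	kubectl config delete-context flannel-821749
	kubectl config delete-cluster flannel-821749
	kubectl config delete-user    flannel-821749
	# Deleting the profile itself also removes its kubeconfig entries:
	out/minikube-linux-amd64 delete -p flannel-821749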

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-433330 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-278042 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (24.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (24.413699649s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (18.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-837172 --alsologtostderr -v=3
E1219 03:24:31.018602    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:31.023883    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:31.034220    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:31.054546    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:31.094864    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:31.175226    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:31.335522    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:31.656556    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:32.297513    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.336047    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.341362    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.351650    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.372007    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.412364    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.492738    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.578164    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/old-k8s-version-433330/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.653378    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:24:33.973958    8536 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/no-preload-278042/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-837172 --alsologtostderr -v=3: (18.293654761s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (18.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-805185 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-717222 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837172 -n newest-cni-837172
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837172 -n newest-cni-837172: exit status 7 (98.194331ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-837172 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (31.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-837172 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (31.111073635s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-837172 -n newest-cni-837172
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.43s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
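The warning above is expected for profiles started with --network-plugin=cni: minikube leaves pod networking to the user, so user workloads stay unschedulable until a CNI manifest is applied. A hedged sketch of that extra setup, with <cni-manifest.yaml> as a placeholder for whichever CNI the reader chooses (its pod CIDR would need to match the 10.42.0.0/16 passed via kubeadm.pod-network-cidr):

	# Apply a CNI to the newest-cni profile, then watch for pods to become schedulable.
	kubectl --context newest-cni-837172 apply -f <cni-manifest.yaml>
	kubectl --context newest-cni-837172 get pods -A --watch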

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-837172 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
156 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
370 TestNetworkPlugins/group/kubenet 3.69
378 TestNetworkPlugins/group/cilium 4.25
394 TestStartStop/group/disable-driver-mounts 0.24
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-821749 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-821749" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-254196
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:57:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: force-systemd-flag-675485
contexts:
- context:
    cluster: cert-expiration-254196
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-254196
  name: cert-expiration-254196
- context:
    cluster: force-systemd-flag-675485
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:57:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-675485
  name: force-systemd-flag-675485
current-context: force-systemd-flag-675485
kind: Config
users:
- name: cert-expiration-254196
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/cert-expiration-254196/client.crt
    client-key: /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/cert-expiration-254196/client.key
- name: force-systemd-flag-675485
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/force-systemd-flag-675485/client.crt
    client-key: /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/force-systemd-flag-675485/client.key
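The kubeconfig above contains contexts only for cert-expiration-254196 and force-systemd-flag-675485; the kubenet-821749 context was never created, which is why every kubectl query in this debugLogs block fails with "context was not found". A quick way to confirm which contexts actually exist (a sketch, not part of the captured log):

	kubectl config get-contexts
	kubectl --context cert-expiration-254196 get nodes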

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-821749

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821749"

                                                
                                                
----------------------- debugLogs end: kubenet-821749 [took: 3.484864251s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-821749" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-821749
--- SKIP: TestNetworkPlugins/group/kubenet (3.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-821749 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-821749" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-4987/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-254196
contexts:
- context:
    cluster: cert-expiration-254196
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 02:56:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-254196
  name: cert-expiration-254196
current-context: ""
kind: Config
users:
- name: cert-expiration-254196
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/cert-expiration-254196/client.crt
    client-key: /home/jenkins/minikube-integration/22230-4987/.minikube/profiles/cert-expiration-254196/client.key
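Note: the kubeconfig above contains only the cert-expiration-254196 entry and current-context is empty, which is why every kubectl query in this debug dump fails with "context was not found for specified context: cilium-821749" (the cilium profile was never started). A minimal sketch of how to confirm this by hand, assuming kubectl is on PATH on the Jenkins host and KUBECONFIG points at the same file (commands are illustrative, not part of the test run):

	kubectl config get-contexts                          # cilium-821749 is not listed
	kubectl config use-context cert-expiration-254196    # select the one context that does exist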

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-821749

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-821749" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821749"

                                                
                                                
----------------------- debugLogs end: cilium-821749 [took: 4.049191887s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-821749" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-821749
--- SKIP: TestNetworkPlugins/group/cilium (4.25s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-507648" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-507648
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
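Note: this skip is expected on this job; the group exercises hypervisor filesystem mounts and is gated to the virtualbox driver, while this run uses the docker driver with crio. A hypothetical invocation on a host with VirtualBox installed (the profile name matches the one cleaned up above, and the flag name follows minikube's documented --disable-driver-mounts option) would look like:

	out/minikube-linux-amd64 start -p disable-driver-mounts-507648 --driver=virtualbox --disable-driver-mounts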

                                                
                                    